4 changes: 2 additions & 2 deletions .gitignore
@@ -10,12 +10,12 @@ onnx
results

# Python
__pycache__
__pycache__/
*.py[cod]
*$py.class
*.so
.Python

*.python-version
# Virtual environments
.venv
venv/
1 change: 1 addition & 0 deletions assets
Submodule assets added at beb75e
146 changes: 43 additions & 103 deletions py/README.md
@@ -1,134 +1,74 @@
# TTS ONNX Inference Examples
# Supertonic — Lightning Fast, On-Device TTS

This guide provides examples for running TTS inference using `example_onnx.py`.
[![Demo](https://img.shields.io/badge/🤗%20Hugging%20Face-Demo-yellow)](https://huggingface.co/spaces/Supertone/supertonic#interactive-demo)
[![Models](https://img.shields.io/badge/🤗%20Hugging%20Face-Models-blue)](https://huggingface.co/Supertone/supertonic)

## 📰 Update News
<p align="center">
<img src="img/Supertonic_IMG_v02_4x.webp" alt="Supertonic Banner">
</p>

**2025.11.23** - Enhanced text preprocessing with comprehensive normalization, emoji removal, symbol replacement, and punctuation handling for improved synthesis quality.
**Supertonic** is a lightning-fast, on-device text-to-speech system designed for **extreme performance** with minimal computational overhead. Powered by ONNX Runtime, it runs entirely on your device—no cloud, no API calls, no privacy concerns.

**2025.11.19** - Added `--speed` parameter to control speech synthesis speed. Adjust the speed factor to make speech faster or slower while maintaining natural quality.
Watch Supertonic running on a **Raspberry Pi**—demonstrating on-device, real-time text-to-speech synthesis:

**2025.11.19** - Added automatic text chunking for long-form inference. Long texts are split into chunks and synthesized with natural pauses.
https://github.com/user-attachments/assets/ea66f6d6-7bc5-4308-8a88-1ce3e07400d2

## Installation
> 🎧 **Try it now**: Experience Supertonic in your browser with our [**Interactive Demo**](https://huggingface.co/spaces/Supertone/supertonic#interactive-demo), or get started with pre-trained models from [**Hugging Face Hub**](https://huggingface.co/Supertone/supertonic)

This project uses [uv](https://docs.astral.sh/uv/) for fast package management.

### Install uv (if not already installed)
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
## Updates for the Python version
- GPU support added (NVIDIA CUDA GPUs and any GPU with DirectX 12 support)!

### Install dependencies
```bash
uv sync
```
## How to run it
First, clone the repository, fetch the model assets, and install the dependencies:

Or if you prefer using traditional pip with requirements.txt:
```bash
pip install -r requirements.txt
git clone https://github.com/supertone-inc/supertonic.git
cd supertonic
git clone https://huggingface.co/Supertone/supertonic assets
cd py
uv sync
```
Then activate the virtual environment.
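For reference, activation typically looks like this (assuming uv's default `.venv` location; the existence check keeps the snippet safe to run even before the environment is created):

```shell
# Activate the environment created by `uv sync` (uv's default path is .venv)
if [ -f .venv/bin/activate ]; then
    . .venv/bin/activate      # Linux / macOS
fi
# On Windows PowerShell, the equivalent is: .venv\Scripts\Activate.ps1
```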

## Basic Usage
Then install the ONNX Runtime package for your target device:

### Running the model on CPU

### Example 1: Default Inference
Run inference with default settings:
```bash
uv run example_onnx.py
uv add onnxruntime
```

This will use:
- Voice style: `assets/voice_styles/M1.json`
- Text: "This morning, I took a walk in the park, and the sound of the birds and the breeze was so pleasant that I stopped for a long time just to listen."
- Output directory: `results/`
- Total steps: 5
- Number of generations: 4
### Running on any GPU with DirectX 12

### Example 2: Batch Inference
Process multiple voice styles and texts at once:
```bash
uv run example_onnx.py \
--voice-style assets/voice_styles/M1.json assets/voice_styles/F1.json \
--text "The sun sets behind the mountains, painting the sky in shades of pink and orange." "The weather is beautiful and sunny outside. A gentle breeze makes the air feel fresh and pleasant." \
--batch
uv add onnxruntime-directml
```

This will:
- Use `--batch` flag to enable batch processing mode
- Generate speech for 2 different voice-text pairs
- Use male voice style (M1.json) for the first text
- Use female voice style (F1.json) for the second text
- Process both samples in a single batch (automatic text chunking disabled)
### Running on an NVIDIA CUDA GPU

### Example 3: High Quality Inference
Increase denoising steps for better quality:
```bash
uv run example_onnx.py \
--total-step 10 \
--voice-style assets/voice_styles/M1.json \
--text "Increasing the number of denoising steps improves the output's fidelity and overall quality."
uv add onnxruntime-gpu
```
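After installing one of the runtime packages above, you can sanity-check which execution providers are visible (the output depends on your hardware and on which `onnxruntime` package you installed; the `ImportError` fallback below is only there so the snippet runs anywhere):

```python
try:
    import onnxruntime as ort
    # CPUExecutionProvider is always present; CUDA/DirectML only appear
    # when the matching package and drivers are installed.
    providers = ort.get_available_providers()
except ImportError:
    providers = ["CPUExecutionProvider"]  # onnxruntime not installed here
print(providers)
```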

This will:
- Use 10 denoising steps instead of the default 5
- Produce higher quality output at the cost of slower inference
### Arguments

### Example 4: Long-Form Inference
For long texts, the system automatically chunks the text into manageable segments and generates a single audio file:
```bash
uv run example_onnx.py \
--voice-style assets/voice_styles/M1.json \
--text "Once upon a time, in a small village nestled between rolling hills, there lived a young artist named Clara. Every morning, she would wake up before dawn to capture the first light of day. The golden rays streaming through her window inspired countless paintings. Her work was known throughout the region for its vibrant colors and emotional depth. People from far and wide came to see her gallery, and many said her paintings could tell stories that words never could."
```
| Argument | Type | Default | Description |
|---|---|---|---|
| `--use-gpu` | flag | `False` | Use GPU for inference (default: CPU). |
| `--onnx-dir` | str | `assets/onnx` | Path to the directory containing the ONNX models. |
| `--total-step`| int | `5` | Number of denoising steps. |
| `--speed` | float | `1.05` | Speech speed. Higher is faster. |
| `--n-test` | int | `4` | Number of times to generate speech for each text. |
| `--batch` | flag | `False` | Enable batch processing for multiple texts and voice styles. |
| `--voice-style` | str(list) | `assets/voice_styles/M1.json` | Path to one or more voice style JSON files. |
| `--text` | str(list) | `"This morning, I took a walk in the park."` | One or more texts to synthesize. |
| `--save-dir` | str | `results` | The directory where the output audio files will be saved. |

This will:
- Automatically split the long text into smaller chunks (max 300 characters by default)
- Process each chunk separately while maintaining natural speech flow
- Insert brief silences (0.3 seconds) between chunks for natural pacing
- Combine all chunks into a single output audio file
### Example

**Note**: When using batch mode (`--batch`), automatic text chunking is disabled. Use non-batch mode for long-form text synthesis.
To synthesize a single sentence with the default voice style:

### Example 5: Adjusting Speech Speed
Control the speed of speech synthesis:
```bash
# Faster speech (speed > 1.0)
uv run example_onnx.py \
--voice-style assets/voice_styles/F2.json \
--text "This text will be synthesized at a faster pace." \
--speed 1.2

# Slower speech (speed < 1.0)
uv run example_onnx.py \
--voice-style assets/voice_styles/M2.json \
--text "This text will be synthesized at a slower, more deliberate pace." \
--speed 0.9
uv run synthesize.py --use-gpu --text "Hello, this is a test."
```

This will:
- Use `--speed 1.2` to generate faster speech
- Use `--speed 0.9` to generate slower speech
- Default speed is 1.05 if not specified
- Recommended speed range is between 0.9 and 1.5 for natural-sounding results

## Available Arguments

| Argument | Type | Default | Description |
|----------|------|---------|-------------|
| `--use-gpu` | flag | False | Use GPU for inference (with CPU fallback) |
| `--onnx-dir` | str | `assets/onnx` | Path to ONNX model directory |
| `--total-step` | int | 5 | Number of denoising steps (higher = better quality, slower) |
| `--speed` | float | 1.05 | Speech speed factor (higher = faster, lower = slower) |
| `--n-test` | int | 4 | Number of times to generate each sample |
| `--voice-style` | str+ | `assets/voice_styles/M1.json` | Voice style file path(s) |
| `--text` | str+ | (long default text) | Text(s) to synthesize |
| `--save-dir` | str | `results` | Output directory |
| `--batch` | flag | False | Enable batch mode (disables automatic text chunking) |

## Notes

- **Batch Processing**: The number of `--voice-style` files must match the number of `--text` entries
- **Long-Form Inference**: Without `--batch` flag, long texts are automatically chunked and combined into a single audio file with natural pauses
- **Quality vs Speed**: Higher `--total-step` values produce better quality but take longer
- **GPU Support**: GPU mode is not supported yet

1 change: 0 additions & 1 deletion py/assets

This file was deleted.

104 changes: 0 additions & 104 deletions py/example_onnx.py

This file was deleted.

29 changes: 29 additions & 0 deletions py/onnx_utils.py
@@ -0,0 +1,29 @@
import os
from pathlib import Path
import onnxruntime as ort

def get_best_provider(use_gpu: bool = True):
    """
    Returns the best available ONNX Runtime execution provider.
    Prioritizes CUDA, then DirectML, then CPU.
    """
    if not use_gpu:
        return ["CPUExecutionProvider"]

    available = ort.get_available_providers()
    print(f"🔎 Available Providers: {available}")

    providers_list = []

    if 'CUDAExecutionProvider' in available:
        print("✅ Found CUDA! Configuring for NVIDIA GPU...")
        providers_list.append('CUDAExecutionProvider')

    if 'DmlExecutionProvider' in available:
        print("✅ Found DirectML! Configuring for GPU...")
        providers_list.append('DmlExecutionProvider')

    providers_list.append('CPUExecutionProvider')

    print(f"🚀 Selected Providers: {providers_list}")
    return providers_list
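The selection order implemented above (CUDA first, then DirectML, with CPU as the unconditional fallback) can be sketched as a pure function, independent of `onnxruntime`; `pick_providers` below is a hypothetical name for illustration, not part of the module:

```python
def pick_providers(available, use_gpu=True):
    """Mirror get_best_provider's priority: CUDA > DirectML > CPU."""
    if not use_gpu:
        return ["CPUExecutionProvider"]
    preferred = ["CUDAExecutionProvider", "DmlExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    chosen.append("CPUExecutionProvider")  # CPU is always the final fallback
    return chosen

print(pick_providers(["DmlExecutionProvider", "CPUExecutionProvider"]))
# → ['DmlExecutionProvider', 'CPUExecutionProvider']
```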
27 changes: 10 additions & 17 deletions py/pyproject.toml
@@ -1,20 +1,13 @@
[project]
name = "tts-onnx"
version = "1.0.0"
description = "TTS ONNX Inference"
requires-python = ">=3.10"
name = "py"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
"onnxruntime==1.23.1",
"numpy>=1.26.0",
"soundfile>=0.12.1",
"librosa>=0.10.0",
"PyYAML>=6.0",
"librosa>=0.11.0",
"numpy>=2.3.5",
"pyyaml>=6.0.3",
"soundfile>=0.13.1",
"torch>=2.9.1",
]

[tool.setuptools]
py-modules = []

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

5 changes: 0 additions & 5 deletions py/requirements.txt

This file was deleted.
