We present VoXtream2, a zero-shot full-stream TTS model with dynamic speaking-rate control that can be updated mid-utterance on the fly.
For audio examples, see our demo page.
Try VoXtream2 in your browser on HuggingFace 🤗 space.
- Dynamic speed control: Distribution matching and classifier-free guidance enable fine-grained speaking-rate control that can be adjusted while the model generates speech (see the sketch after this list).
- Streaming performance: Runs 4x faster than real time and achieves 74 ms first-packet latency in full-stream mode on a consumer GPU.
- Translingual capability: Prompt text masking enables support for acoustic prompts in any language.
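As a rough illustration of the classifier-free guidance part of the speed control: at each decoding step the model can blend rate-conditioned and unconditional predictions with a guidance weight. The sketch below is ours, not the VoXtream2 implementation; all names are placeholders.

```python
import torch

def guided_logits(cond_logits: torch.Tensor,
                  uncond_logits: torch.Tensor,
                  guidance: float = 2.0) -> torch.Tensor:
    """Classifier-free guidance (illustrative only; names are placeholders,
    not VoXtream2 internals). guidance=0 ignores the speaking-rate
    condition; larger values follow it more strictly."""
    return uncond_logits + guidance * (cond_logits - uncond_logits)
```

Because the guidance weight scales the difference between the two predictions at every step, it can be changed between steps, which is what makes mid-utterance rate adjustment possible.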
2026/04/30:
- Added a dynamic speaking-rate control interactive demo. You can now adjust the speaking rate in real time as the model produces speech. Run voxtream-app locally after installing the package, or check our HuggingFace space.
- Added cache reset in SynkAttention. Fixed a bug that caused the model to generate noise by relying on an invalid prompt cache.

2026/04/08: Added a frame repeat counter, which reduces hallucinations caused by the model getting stuck on the same frame (a conceptual sketch follows below). Controlled by the frame_repeat_counter parameter in SpeechGeneratorConfig. Recommended values are 12-25; use lower values for stricter control.

2026/03: We released VoXtream2.

2026/01: VoXtream was accepted for an oral presentation at ICASSP 2026.

2025/09: We released VoXtream. Now available on the voxtream branch.
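Conceptually, the frame repeat counter works like the sketch below: count consecutive identical frames and force a resample once the limit is hit. This is a minimal sketch of the idea, with max_repeats standing in for frame_repeat_counter; it is not the actual implementation.

```python
def check_frame_repeat(frame, prev_frame, repeat_count, max_repeats=16):
    """Illustrative sketch, not VoXtream2 code: track how many times the
    same frame has been emitted in a row and ask the sampler to resample
    once max_repeats (cf. frame_repeat_counter) is exceeded."""
    repeat_count = repeat_count + 1 if frame == prev_frame else 0
    must_resample = repeat_count >= max_repeats
    return repeat_count, must_resample
```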
# For Debian-like distribution (e.g. Ubuntu, Mint, etc.)
apt-get install espeak-ng
# For RedHat-like distribution (e.g. CentOS, Fedora, etc.)
yum install espeak-ng
# For MacOS
brew install espeak-ng

pip install "voxtream>=0.2.3"

- Prompt audio: a file containing 3-10 seconds of the target voice. The maximum supported length is 20 seconds (longer audio will be trimmed).
- Text: What you want the model to say. The maximum supported length is 1000 characters (longer text will be trimmed).
- Speaking rate (optional): target speaking rate in syllables per second.
Notes:
- The model was tested on Ubuntu 22.04 with CUDA 12 and PyTorch 2.4.
- The model requires 2.2 GB of VRAM (enabling speech enhancement adds 2 GB).
- Maximum generation length is limited to 1 minute.
- The initial run may take a bit longer to download the model weights and warm up the model graph.
- If you experience problems with CUDAGraphs, please check this issue.
voxtream \
--prompt-audio assets/audio/english_male.wav \
--text "In general, some method is then needed to evaluate each approximation." \
--output "output_stream.wav"voxtream \
--prompt-audio assets/audio/english_female.wav \
--text "Staff do not always do enough to prevent violence." \
--output "full_stream_2sps.wav" \
--full-stream \
--spk-rate 2.0

voxtream \
--prompt-audio assets/test/english_male.wav \
--text "In general, however, some method is then needed to evaluate each approximation." \
--output "output_enhanced.wav" \
--prompt-enhancement

import json
from itertools import repeat
from pathlib import Path
import numpy as np
import soundfile as sf
from voxtream.utils.generator import (
    set_seed,
    text_generator,
)
from voxtream.generator import SpeechGenerator, SpeechGeneratorConfig
set_seed()
with open('configs/generator.json') as f:
    config = SpeechGeneratorConfig(**json.load(f))
with open('configs/speaking_rate.json') as f:
    spk_rate_config = json.load(f)
speech_generator = SpeechGenerator(config, spk_rate_config)
# Output streaming, no speaking rate control
speech_stream = speech_generator.generate_stream(
    prompt_audio_path=Path('assets/audio/english_male.wav'),
    text="In general, however, some method is then needed to evaluate each approximation.",
)
audio_frames = [audio_frame for audio_frame, _ in speech_stream]
sf.write('output_stream.wav', np.concatenate(audio_frames), config.mimi_sr)
# Full streaming & fixed speaking rate control (2 syllables per second)
speech_stream = speech_generator.generate_stream(
    prompt_audio_path=Path('assets/audio/english_female.wav'),
    text=text_generator("Staff do not always do enough to prevent violence."),
    speaking_rate=repeat(2.0),
)
audio_frames = [audio_frame for audio_frame, _ in speech_stream]
sf.write('full_stream_2sps.wav', np.concatenate(audio_frames), config.mimi_sr)
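The fixed-rate example above passes itertools.repeat(2.0), which suggests that speaking_rate accepts any iterable of per-step rates. Assuming that holds, a time-varying rate can be supplied with an ordinary generator; the ramp below is only an illustration.

```python
# Assumed usage: speaking_rate consumes one value per generation step.
# The schedule itself is arbitrary; here we ramp from 2 to 5 syl/s.
def rate_schedule(start=2.0, end=5.0, step=0.05):
    rate = start
    while True:
        yield rate
        rate = min(end, rate + step)

speech_stream = speech_generator.generate_stream(
    prompt_audio_path=Path('assets/audio/english_female.wav'),
    text=text_generator("Staff do not always do enough to prevent violence."),
    speaking_rate=rate_schedule(),
)
audio_frames = [audio_frame for audio_frame, _ in speech_stream]
sf.write('dynamic_rate.wav', np.concatenate(audio_frames), config.mimi_sr)
```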
To start a Gradio web demo, run:

voxtream-app

To start a WebSocket server, run:
voxtream-server

To send a request to the server, run:
python voxtream/client.py

It sends the path to an audio prompt and the text to the server and immediately plays audio from the output stream.
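If you want to write your own client, the sketch below shows the general shape of a streaming WebSocket consumer. The endpoint, JSON payload, and raw-PCM response format are assumptions on our part, not documented here; consult voxtream/client.py for the actual protocol.

```python
import asyncio
import json

import websockets  # pip install websockets

async def synthesize(prompt_audio: str, text: str, out_path: str = "out.pcm"):
    # Hypothetical endpoint and message format; the real ones are defined
    # by voxtream-server and voxtream/client.py.
    async with websockets.connect("ws://localhost:8765") as ws:
        await ws.send(json.dumps({"prompt_audio": prompt_audio, "text": text}))
        with open(out_path, "wb") as f:
            async for chunk in ws:  # server streams audio packets
                if isinstance(chunk, bytes):
                    f.write(chunk)  # assumed raw PCM; wrap in a WAV header to play

asyncio.run(synthesize("assets/audio/english_male.wav", "Hello from VoXtream2!"))
```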
To reproduce the evaluation metrics from the paper, check the evaluation section.
- Build the Docker container. If you have a different version of Docker Compose installed, use docker compose -f ... instead.

docker-compose -f .devcontainer/docker-compose.yaml build voxtream

- Run training using the train.py script. Specify the GPU IDs that should be visible inside the container, e.g. GPU_IDS=0,1, and set the batch size according to your GPU: the default batch size is 64 (tested on an H200), while batch size 12 fits on an RTX3090. The dataset (80 GB) will be downloaded automatically to the HF cache directory. The data is loaded into RAM during training, so make sure you can allocate ~80 GB of RAM per GPU. Results are stored in the ./experiments directory.

Example of running training on 2 GPUs with batch size 12:

GPU_IDS=0,1 docker-compose -f .devcontainer/docker-compose.yaml run voxtream python voxtream/train.py batch_size=12

To prepare a custom training dataset, check the dataset section.
To evaluate the model's real-time factor (RTF) and first-packet latency (FPL), run voxtream-benchmark. You can compile the model for faster inference using the --compile flag (note that the initial compilation takes some time).
| Device | Compiled | FPL, ms | RTF |
|---|---|---|---|
| RTX3090 |  | 74 | 0.256 |
| RTX3090 | ✔️ | 63 | 0.173 |
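For reference, both metrics can be measured by hand from the streaming generator, following the usual definitions: FPL is the wall-clock time until the first audio packet, and RTF is generation time divided by generated audio duration. The snippet reuses speech_generator and config from the Python example above and assumes each audio_frame is a 1-D array of samples at config.mimi_sr.

```python
import time

start = time.perf_counter()
first_packet_s = None
total_samples = 0
for audio_frame, _ in speech_generator.generate_stream(
    prompt_audio_path=Path('assets/audio/english_male.wav'),
    text="In general, some method is then needed to evaluate each approximation.",
):
    if first_packet_s is None:
        first_packet_s = time.perf_counter() - start  # first-packet latency
    total_samples += len(audio_frame)  # assumes 1-D sample arrays
elapsed = time.perf_counter() - start
rtf = elapsed / (total_samples / config.mimi_sr)  # < 1 is faster than real time
print(f"FPL: {first_packet_s * 1000:.0f} ms, RTF: {rtf:.3f}")
```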
- Add finetuning instructions
The code in this repository is provided under the MIT License.
The Depth Transformer component from SesameAI-CSM is included under the Apache 2.0 License (see LICENSE-APACHE and NOTICE).
The model weights were trained on data licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0). Redistribution of the weights must include proper attribution to the original dataset creators (see ATTRIBUTION.md).
- Mimi: Streaming audio codec from Kyutai
- CSM: Conversation speech model from Sesame
- ReDimNet: Speaker recognition model from IDR&D
- Sidon: Speech enhancement model from SaruLab
@inproceedings{torgashov2026voxtream,
title={Vo{X}tream: Full-Stream Text-to-Speech with Extremely Low Latency},
author={Torgashov, Nikita and Henter, Gustav Eje and Skantze, Gabriel},
booktitle={Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
year={2026},
note={to appear},
url={https://arxiv.org/abs/2509.15969}
}
@article{torgashov2026voxtream2,
author = {Torgashov, Nikita and Henter, Gustav Eje and Skantze, Gabriel},
title = {Vo{X}tream2: Full-stream TTS with dynamic speaking rate control},
journal = {arXiv:2603.13518},
year = {2026}
}
Any organization or individual is prohibited from using any technology mentioned in this paper to generate anyone's speech without their consent, including but not limited to the speech of government leaders, political figures, and celebrities. Failure to comply may put you in violation of copyright laws.