🤖 AI Summary
This work addresses the complexity and poor scalability of existing discrete non-autoregressive text-to-speech (TTS) systems in multilingual zero-shot scenarios. The authors propose a diffusion language model–inspired discrete non-autoregressive architecture that directly maps input text to multi-codebook acoustic tokens. By introducing a full-codebook random masking training strategy and leveraging large language model initialization, the approach substantially simplifies the synthesis pipeline while significantly improving speech intelligibility and cross-lingual generalization. Trained on 581,000 hours of open-source multilingual data, the system supports over 600 languages—achieving the broadest language coverage to date—and demonstrates state-of-the-art performance on English, Chinese, and multilingual benchmarks. The code and pretrained models are publicly released.
📝 Abstract
We present OmniVoice, a massively multilingual zero-shot text-to-speech (TTS) model that scales to over 600 languages. At its core is a novel diffusion language model-style discrete non-autoregressive (NAR) architecture. Unlike conventional discrete NAR models, which suffer from performance bottlenecks in complex two-stage (text-to-semantic-to-acoustic) pipelines, OmniVoice directly maps text to multi-codebook acoustic tokens. This simplification is enabled by two key technical innovations: (1) a full-codebook random masking strategy for efficient training, and (2) initialization from a pre-trained LLM for superior intelligibility. Leveraging a 581k-hour multilingual dataset curated entirely from open-source data, OmniVoice achieves the broadest language coverage to date and delivers state-of-the-art performance across Chinese, English, and diverse multilingual benchmarks. Our code and pre-trained models are publicly available at https://github.com/k2-fsa/OmniVoice.
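The abstract only names the full-codebook random masking strategy without detailing it. One plausible reading, in the spirit of mask-predict / discrete-diffusion training, is that a random subset of frames is chosen per example and the tokens of *all* codebooks at those frames are replaced by a mask token, so the NAR model learns to predict every codebook in parallel. A minimal Python sketch under that assumption (the `MASK_ID` value and the uniform masking-ratio schedule are hypothetical, not taken from the paper):

```python
import random

MASK_ID = 1024  # hypothetical mask token id, outside the codec vocabulary


def full_codebook_random_mask(tokens, rng):
    """Mask a random subset of frames across ALL codebooks at once.

    tokens: T x C grid (list of lists) of acoustic codec token ids,
            T frames by C codebooks.
    Returns (masked_tokens, target) where target[t] is True at the frames
    the model would be trained to predict.
    """
    T, C = len(tokens), len(tokens[0])
    ratio = rng.uniform(0.0, 1.0)          # assumed per-example masking schedule
    n_mask = max(1, round(ratio * T))      # always mask at least one frame
    masked_frames = set(rng.sample(range(T), n_mask))

    # At a masked frame, every codebook position is masked jointly.
    masked = [[MASK_ID] * C if t in masked_frames else row[:]
              for t, row in enumerate(tokens)]
    target = [t in masked_frames for t in range(T)]
    return masked, target


# Toy usage: 8 frames, 4 codebooks of 1024 entries each.
rng = random.Random(0)
codes = [[rng.randrange(1024) for _ in range(4)] for _ in range(8)]
masked, target = full_codebook_random_mask(codes, rng)
```

Masking whole frames rather than independent (frame, codebook) cells would keep the codebooks of a masked frame consistent with each other at prediction time, which is one way the single-stage text-to-acoustic mapping could stay tractable.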