AI Summary
Existing continuous speech representations suffer from poor robustness under distribution shift and offer limited controllability. To address this, we propose DiSTAR, the first zero-shot text-to-speech (TTS) framework operating entirely in the discrete residual vector quantization (RVQ) code space. DiSTAR couples block-level autoregressive modeling with a parallel masked diffusion model, eliminating the need for explicit duration prediction or forced alignment. It enables controllable generation via classifier-free guidance sampling and hierarchical RVQ code inference, supporting dynamic bitrate and computation reduction through RVQ layer pruning as well as multiple decoding strategies. Experiments demonstrate that DiSTAR significantly outperforms state-of-the-art zero-shot TTS methods, achieving superior naturalness, speaker consistency, synthesis robustness, and phonetic/expressive diversity.
Abstract
Recent attempts to interleave autoregressive (AR) sketchers with diffusion-based refiners over continuous speech representations have shown promise, but they remain brittle under distribution shift and offer limited levers for controllability. We introduce DISTAR, a zero-shot text-to-speech framework that operates entirely in a discrete residual vector quantization (RVQ) code space and tightly couples an AR language model with a masked diffusion model, without forced alignment or a duration predictor. Concretely, DISTAR drafts block-level RVQ tokens with an AR language model and then performs parallel masked-diffusion infilling conditioned on the draft to complete the next block, yielding long-form synthesis with blockwise parallelism while mitigating classic AR exposure bias. The discrete code space affords explicit control at inference: DISTAR produces high-quality audio under both greedy and sample-based decoding using classifier-free guidance, supports trade-offs between robustness and diversity, and enables variable bit-rate and controllable computation via RVQ layer pruning at test time. Extensive experiments and ablations demonstrate that DISTAR surpasses state-of-the-art zero-shot TTS systems in robustness, naturalness, and speaker/style consistency, while maintaining rich output diversity. Audio samples are provided at https://anonymous.4open.science/w/DiSTAR_demo.
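To make the decoding scheme concrete, the sketch below mimics the described loop: an AR model drafts block-level codes, a masked diffusion model infills the remaining RVQ layers of that block in a few parallel unmasking steps, and truncating the layer stack at test time emulates RVQ layer pruning. This is a minimal illustrative sketch, not the paper's implementation: the models are replaced by random placeholders, and all constants (codebook size, layer count, block length, step count) are hypothetical.

```python
import random

random.seed(0)

# Hypothetical constants (illustrative only, not taken from the paper).
CODEBOOK_SIZE = 1024   # entries per RVQ codebook
NUM_RVQ_LAYERS = 8     # depth of the residual quantizer
BLOCK_LEN = 16         # frames per AR block
MASK = None            # marker for a still-masked position

def ar_draft_block(prev_block):
    """Stand-in for the AR language model: draft the coarse (layer-0)
    RVQ codes of the next block, conditioned on the previous block.
    A real model would be a transformer; here we sample uniformly."""
    return [random.randrange(CODEBOOK_SIZE) for _ in range(BLOCK_LEN)]

def masked_diffusion_infill(draft, num_layers, steps=4):
    """Stand-in for the parallel masked diffusion model: start from a
    fully masked block of residual layers and unmask a fraction of
    positions per step, conditioned on the AR draft (layer 0)."""
    block = [list(draft)] + [[MASK] * BLOCK_LEN for _ in range(num_layers - 1)]
    masked = [(l, t) for l in range(1, num_layers) for t in range(BLOCK_LEN)]
    per_step = max(1, len(masked) // steps)
    while masked:
        chosen, masked = masked[:per_step], masked[per_step:]
        for l, t in chosen:  # a real model predicts these positions in parallel
            block[l][t] = random.randrange(CODEBOOK_SIZE)
    return block

def synthesize(num_blocks, active_layers=NUM_RVQ_LAYERS):
    """Blockwise loop: the AR model drafts each block, the diffusion model
    completes it. Passing active_layers < NUM_RVQ_LAYERS emulates test-time
    RVQ layer pruning (variable bit-rate, reduced computation)."""
    frames, prev = [[] for _ in range(active_layers)], None
    for _ in range(num_blocks):
        draft = ar_draft_block(prev)
        block = masked_diffusion_infill(draft, active_layers)
        for l in range(active_layers):
            frames[l].extend(block[l])
        prev = block
    return frames  # frames[layer][frame] -> RVQ code index

full = synthesize(3)                     # all 8 layers, 48 frames
pruned = synthesize(3, active_layers=4)  # pruned stack: lower bit-rate
```

In an actual system the code matrix would then be fed to an RVQ decoder to reconstruct the waveform; the sketch stops at the code level because only the control flow (blockwise AR drafting, parallel infilling, layer pruning) is being illustrated.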