🤖 AI Summary
Existing autoregressive text-to-speech (TTS) methods rely on single-codebook representations, which lose fine-grained acoustic information such as prosodic nuance and speaker timbre; the loss is particularly damaging in singing and music synthesis. To address this, the authors propose QTTS, a TTS framework built on QDAC, a residual vector quantization-based audio codec that enables low-distortion, high-fidelity speech reconstruction. QDAC trains an ASR-based autoregressive network end-to-end with a GAN, achieving strong semantic disentanglement for scalable, near-lossless compression. QTTS then models the resulting discrete codes with two strategies: a Hierarchical Parallel dual-AR architecture that captures inter-codebook dependencies for higher-quality synthesis, and a Delay Multihead scheme that predicts codebooks in parallel with a fixed delay to accelerate inference. Experiments show that QTTS outperforms state-of-the-art baselines in naturalness and expressiveness, including under complex acoustic conditions such as singing synthesis.
📝 Abstract
Text-to-speech (TTS) synthesis has seen renewed progress under the discrete modeling paradigm. Existing autoregressive approaches often rely on single-codebook representations, which suffer from significant information loss. Even with post-hoc refinement techniques such as flow matching, these methods fail to recover fine-grained details (e.g., prosodic nuances, speaker-specific timbres), especially in challenging scenarios like singing voice or music synthesis. We propose QTTS, a novel TTS framework built upon our new audio codec, QDAC. The core innovation of QDAC lies in its end-to-end training of an ASR-based autoregressive network with a GAN, which achieves superior semantic feature disentanglement for scalable, near-lossless compression. QTTS models these discrete codes using two innovative strategies: the Hierarchical Parallel architecture, which uses a dual-AR structure to model inter-codebook dependencies for higher-quality synthesis, and the Delay Multihead approach, which employs parallelized prediction with a fixed delay to accelerate inference. Our experiments demonstrate that the proposed framework achieves higher synthesis quality and better preserves expressive content compared to baselines. This suggests that scaling up compression via multi-codebook modeling is a promising direction for high-fidelity, general-purpose speech and audio generation.
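The abstract describes the Delay Multihead approach only at a high level. As a minimal sketch, assuming it follows the familiar fixed-delay interleaving used in multi-codebook AR models (where codebook k at frame t is predicted at step t + k, so all K heads can fire in parallel each step), the token-layout transformation might look like this; `apply_delay_pattern` and the `pad` sentinel are illustrative names, not part of the paper:

```python
def apply_delay_pattern(codes, pad=-1):
    """Shift codebook k right by k steps so K heads can predict in parallel.

    codes: list of K rows, each a length-T list of token ids (one row per
    codebook). Returns K rows of length T + K - 1, where row k is padded
    with `pad` on the left (k slots) and right (K - 1 - k slots).
    """
    K = len(codes)
    return [[pad] * k + row + [pad] * (K - 1 - k) for k, row in enumerate(codes)]


def remove_delay_pattern(delayed, pad=-1):
    """Invert apply_delay_pattern, recovering the original K x T layout."""
    K = len(delayed)
    T = len(delayed[0]) - (K - 1)
    return [row[k:k + T] for k, row in enumerate(delayed)]
```

For example, with K = 2 codebooks over T = 3 frames, `apply_delay_pattern([[1, 2, 3], [4, 5, 6]])` yields `[[1, 2, 3, -1], [-1, 4, 5, 6]]`: at decoding step t, head 0 emits the frame-t coarse token while head 1 emits the frame-(t-1) residual token, which is how a fixed delay trades one step of extra latency for fully parallel per-step prediction.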