๐ค AI Summary
Zero-shot text-to-speech (TTS) suffers from slow inference, repetitive artifacts, and underutilization of discrete speech representations. To address these challenges, we propose the first pure discrete-space flow matching framework, which directly models speech generation in the discrete token spaceโbypassing conventional continuous embeddings. Our method introduces a factorized flow prediction mechanism that decouples prosody and acoustic modeling, and incorporates contextual learning via reference audio through fused text, prosodic, and acoustic features. The architecture employs a discrete encoder-decoder, multi-head conditional modeling, and a lightweight flow prediction network. Experiments demonstrate substantial improvements over state-of-the-art methods in naturalness, prosodic accuracy, speaker similarity, and energy control. Moreover, our model achieves 25.8ร faster inference speed than the SOTA baseline, achieving an unprecedented balance between high fidelity and low latency.
๐ Abstract
Zero-shot Text-to-Speech (TTS) aims to synthesize high-quality speech that mimics the voice of an unseen speaker using only a short reference sample, requiring not only speaker adaptation but also accurate modeling of prosodic attributes. Recent approaches based on language models, diffusion, and flow matching have shown promising results in zero-shot TTS, but still suffer from slow inference and repetition artifacts. Discrete codec representations have been widely adopted for speech synthesis, and recent works have begun to explore diffusion models in purely discrete settings, suggesting the potential of discrete generative modeling for speech synthesis. However, existing flow-matching methods typically embed these discrete tokens into a continuous space and apply continuous flow matching, which may not fully leverage the advantages of discrete representations. To address these challenges, we introduce DiFlow-TTS, which, to the best of our knowledge, is the first model to explore purely Discrete Flow Matching for speech synthesis. DiFlow-TTS explicitly models factorized speech attributes within a compact and unified architecture. It leverages in-context learning by conditioning on textual content, along with prosodic and acoustic attributes extracted from a reference speech, enabling effective attribute cloning in a zero-shot setting. In addition, the model employs a factorized flow prediction mechanism with distinct heads for prosody and acoustic details, allowing it to learn aspect-specific distributions. Experimental results demonstrate that DiFlow-TTS achieves promising performance in several key metrics, including naturalness, prosody, preservation of speaker style, and energy control. It also maintains a compact model size and achieves low-latency inference, generating speech up to 25.8 times faster than the latest existing baselines.