🤖 AI Summary
Existing zero-shot streaming TTS methods rely on future-text lookahead, incurring high latency. This paper proposes SMLLE, the first framework to integrate real-time semantic modeling via a Transducer with fully autoregressive streaming mel-spectrogram reconstruction. It introduces semantic tokenization, duration-aligned learning, and a low-latency DeleteMechanism for dynamic frame-wise token deletion, enabling lookahead-free, high-fidelity frame-by-frame speech synthesis. SMLLE matches the naturalness of non-streaming sentence-level TTS while reducing end-to-end latency by over 40%, significantly improving practical streaming usability. Key contributions: (1) a real-time architecture that decouples semantic modeling from acoustic generation; (2) fine-grained temporal alignment and spectral reconstruction without future text; and (3) the DeleteMechanism, a controllable, stable, and computationally efficient streaming output scheduler.
📝 Abstract
Zero-shot streaming text-to-speech is an important research topic in human-computer interaction. Existing methods primarily use a lookahead mechanism, relying on future text to achieve natural streaming speech synthesis, which introduces high processing latency. To address this issue, we propose SMLLE, a streaming framework that generates high-quality speech frame by frame. SMLLE employs a Transducer to convert text into semantic tokens in real time while simultaneously obtaining duration alignment information. The combined outputs are then fed into a fully autoregressive (AR) streaming model to reconstruct mel-spectrograms. To further stabilize the generation process, we design a DeleteMechanism that allows the AR model to access future text while introducing as little delay as possible. Experimental results suggest that SMLLE outperforms current streaming TTS methods and achieves performance comparable to sentence-level TTS systems. Samples are available at https://anonymous.4open.science/w/demo_page-48B7/.
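To make the described pipeline concrete, the following is a minimal, hypothetical sketch of the streaming loop: a Transducer-like step emits (semantic token, duration) pairs per incoming text token, an AR decoder produces one mel frame per duration slot, and a small held-back buffer mimics the DeleteMechanism's limited access to upcoming tokens. All function names and the stub behaviors are illustrative assumptions, not the paper's actual models.

```python
# Hypothetical sketch of an SMLLE-style streaming loop.
# The real Transducer and AR decoder are neural networks; these stubs
# only illustrate the data flow (text -> semantic tokens -> mel frames).

def transducer_step(text_token):
    """Emit (semantic_token, duration) pairs for one text token.
    Stub: one semantic token lasting 2 frames per text token."""
    return [(f"sem<{text_token}>", 2)]

def ar_decoder_step(state, semantic_token):
    """Autoregressively predict one mel frame conditioned on history.
    Stub: records the token and returns a placeholder frame label."""
    state.append(semantic_token)
    return f"mel[{len(state)}]"

def stream_tts(text_tokens, lookahead=1):
    """Frame-by-frame synthesis. Up to `lookahead` semantic tokens are
    held back so the AR model can condition on them before they are
    consumed (a crude stand-in for the DeleteMechanism)."""
    buffer, state, frames = [], [], []
    for tok in text_tokens:
        buffer.extend(transducer_step(tok))
        # Emit frames for tokens no longer needed as lookahead context.
        while len(buffer) > lookahead:
            sem, dur = buffer.pop(0)
            for _ in range(dur):
                frames.append(ar_decoder_step(state, sem))
    # Flush the remaining buffered tokens at end of utterance.
    for sem, dur in buffer:
        for _ in range(dur):
            frames.append(ar_decoder_step(state, sem))
    return frames
```

With the stub durations above, `stream_tts(["a", "b", "c"])` yields six frames, two per text token, produced incrementally rather than after the full sentence is known.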