SpeakStream: Streaming Text-to-Speech with Interleaved Data

📅 2025-05-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional TTS systems require complete sentence input, resulting in high initial-word latency when cascaded with streaming LLMs and severely limiting real-time responsiveness in conversational AI. To address this, the authors propose an end-to-end streaming TTS framework built on a decoder-only Transformer architecture. The method introduces interleaved text–speech modeling with a next-token prediction loss, enabling truly incremental speech synthesis: the model dynamically accepts text chunks and generates speech synchronously. Experiments demonstrate state-of-the-art initial-word latency (<300 ms) while matching the naturalness of non-streaming TTS systems (MOS ≈ 4.2), significantly improving both response efficiency and interaction fluency in cascaded LLM–TTS dialogue systems.
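The interleaved text–speech training data described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the chunk sizes, token values, and the `interleave` helper are all hypothetical, and real systems would interleave discrete speech-codec tokens aligned to the text.

```python
# Hypothetical sketch of interleaved text-speech sequence construction.
# Chunk sizes and token IDs are illustrative assumptions, not from the paper.

def interleave(text_tokens, speech_tokens, text_chunk=4, speech_chunk=8):
    """Alternate fixed-size chunks of text and speech tokens so a
    decoder-only model trained with next-token prediction learns to
    emit speech while text is still arriving."""
    out = []
    t = s = 0
    while t < len(text_tokens) or s < len(speech_tokens):
        out.extend(text_tokens[t:t + text_chunk])   # absorb a text chunk
        t += text_chunk
        out.extend(speech_tokens[s:s + speech_chunk])  # emit speech tokens
        s += speech_chunk
    return out
```

Training on sequences built this way means the model never needs the full utterance before its first speech token, which is what drives down initial-word latency.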

📝 Abstract
The latency bottleneck of traditional text-to-speech (TTS) systems fundamentally hinders the potential of streaming large language models (LLMs) in conversational AI. These TTS systems, typically trained and run on complete utterances, introduce unacceptable delays when coupled with streaming LLM outputs, even with optimized inference speeds. This is particularly problematic for building responsive conversational agents, where low first-token latency is critical. In this paper, we present SpeakStream, a streaming TTS system that generates audio incrementally from streaming text using a decoder-only architecture. SpeakStream is trained with a next-step prediction loss on interleaved text-speech data. During inference, it generates speech incrementally while absorbing streaming input text, making it particularly suitable for cascaded conversational AI agents in which an LLM streams text to a TTS system. Our experiments demonstrate that SpeakStream achieves state-of-the-art first-token latency while maintaining the quality of non-streaming TTS systems.
Problem

Research questions and friction points this paper is trying to address.

Reducing latency in streaming text-to-speech for conversational AI
Enabling incremental audio generation from streaming text inputs
Maintaining speech quality while minimizing first-token latency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Streaming TTS with interleaved data training
Decoder-only architecture for incremental generation
Low first-token latency without sacrificing speech quality
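The incremental generation loop implied by these contributions can be sketched as a toy inference driver. Everything here is an assumption for illustration: `generate_speech` stands in for the decoder-only model's next-token sampler, and `speech_per_chunk` is a hypothetical pacing parameter, not a value from the paper.

```python
# Toy streaming-inference loop: absorb each incoming text chunk from the
# LLM, then emit a burst of speech tokens before waiting for more text.
# `generate_speech` is a placeholder for the model's sampling step.

def stream_tts(text_stream, generate_speech, speech_per_chunk=8):
    context = []
    for chunk in text_stream:            # text arrives incrementally
        context.extend(chunk)            # extend the model's context
        for _ in range(speech_per_chunk):
            tok = generate_speech(context)  # next-token prediction
            context.append(tok)
            yield tok                    # audio token available immediately
```

Because the loop yields speech tokens as soon as the first text chunk arrives, the cascaded LLM–TTS pipeline never stalls waiting for a complete sentence.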