Prosodic Boundary-Aware Streaming Generation for LLM-Based TTS with Streaming Text Input

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two challenges in streaming text-to-speech (TTS) with streaming text input, both stemming from the lack of lookahead information: unnatural prosody, and long-form synthesis failures caused by unbounded context. The authors propose a prosody-boundary-aware post-training strategy that fine-tunes a pre-trained LLM-based TTS model on weakly time-aligned data, teaching it to anticipate content boundaries and stop appropriately given only limited future text. During inference, a sliding-window prompting mechanism carries forward historical text and speech tokens, keeping the context bounded and ensuring seamless concatenation of chunks. This is the first approach to integrate prosodic boundary awareness into streaming TTS post-training, achieving natural pauses without requiring complete sentences. Experiments demonstrate a 66.2% absolute reduction in word error rate (from 71.0% to 4.8%), along with relative improvements of 16.1% and 1.5% in speaker and emotion similarity, respectively, substantially outperforming the CosyVoice-Style baseline.

📝 Abstract
Streaming TTS that receives streaming text is essential for interactive systems, yet this scheme faces two major challenges: unnatural prosody due to missing lookahead and long-form collapse due to unbounded context. We propose a prosodic-boundary-aware post-training strategy, adapting a pretrained LLM-based TTS model using weakly time-aligned data. Specifically, the model learns to stop early at specified content boundaries when provided with only limited future text. During inference, a sliding-window prompt carries forward previous text and speech tokens, ensuring bounded context and seamless concatenation. Evaluations show our method outperforms the CosyVoice-Style interleaved baseline in both short- and long-form scenarios. In long-text synthesis especially, it achieves a 66.2% absolute reduction in word error rate (from 71.0% to 4.8%) and improves speaker and emotion similarity by 16.1% and 1.5% relatively, offering a robust solution for streaming TTS with incremental text.
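The sliding-window prompt described above can be illustrated with a minimal sketch. Note this is a hypothetical reconstruction, not the paper's implementation: the class name, window size, and token representations are assumptions; only the core idea (a bounded history of interleaved text and speech tokens carried into each new synthesis step) comes from the abstract.

```python
from collections import deque

class SlidingWindowPrompt:
    """Bounded history of (text, speech-token) chunks for streaming TTS.
    Older chunks are evicted automatically, so the prompt length stays
    bounded no matter how long the input stream grows."""

    def __init__(self, max_chunks=3):
        # deque with maxlen drops the oldest chunk once the window is full
        self.history = deque(maxlen=max_chunks)

    def add_chunk(self, text, speech_tokens):
        """Record a synthesized chunk (its text and generated speech tokens)."""
        self.history.append((text, speech_tokens))

    def build_prompt(self, incoming_text, lookahead_text=""):
        """Interleave historical text and speech tokens, then append the
        current text plus whatever limited lookahead is available."""
        parts = []
        for text, speech in self.history:
            parts.append(text)
            parts.extend(speech)
        parts.append(incoming_text)
        if lookahead_text:
            parts.append(lookahead_text)
        return parts

# Usage: with max_chunks=2, the first chunk is evicted after the third add,
# so the prompt stays bounded while preserving recent context.
win = SlidingWindowPrompt(max_chunks=2)
win.add_chunk("Hello,", ["<s1>", "<s2>"])
win.add_chunk("world.", ["<s3>"])
win.add_chunk("How are", ["<s4>"])          # "Hello," falls out of the window
prompt = win.build_prompt("you today", lookahead_text="?")
```

The eviction policy (a fixed chunk count) is the simplest possible choice; a real system might instead bound the window by token budget or by prosodic boundaries.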
Problem

Research questions and friction points this paper is trying to address.

Streaming TTS
prosody
long-form collapse
lookahead
unbounded context
Innovation

Methods, ideas, or system contributions that make the work stand out.

prosodic boundary-aware
streaming TTS
LLM-based TTS
sliding-window prompting
weakly time-aligned data
Changsong Liu — Nanyang Technological University, Singapore
Tianrui Wang — Tianjin University (Speech Signal Processing)
Ye Ni — Southeast University, China
Yizhou Peng — Nanyang Technological University, Singapore
Eng Siong Chng — Nanyang Technological University, Singapore