🤖 AI Summary
This work addresses the high latency inherent in traditional cascaded spoken dialogue systems, which stems from the sequential ASR→LLM→TTS pipeline and impedes human-like real-time interaction. To overcome this limitation, the authors propose DDTSR, a novel framework that introduces, for the first time, a discourse-aware dual-track streaming response mechanism enabling “listen-think-speak” concurrency. By using an auxiliary small model to voice discourse connectives while the large language model reasons in parallel, DDTSR dynamically overlaps the ASR, LLM, and TTS stages. The framework further incorporates curriculum learning to enhance discourse coherence and adopts a plug-in architecture compatible with diverse LLM backbones and utterance lengths. Evaluated on two spoken dialogue benchmarks, DDTSR reduces response latency by 19%–51% while maintaining high-quality output.
📝 Abstract
Achieving human-like responsiveness is a critical yet challenging goal for cascaded spoken dialogue systems. Conventional ASR-LLM-TTS pipelines follow a strictly sequential paradigm, requiring complete transcription and full reasoning before speech synthesis can begin, which results in high response latency. We propose the Discourse-Aware Dual-Track Streaming Response (DDTSR) framework, a low-latency architecture that enables listen-while-thinking and speak-while-thinking. DDTSR is built upon three key mechanisms: (1) connective-guided small-large model synergy, where an auxiliary small model generates minimally committal discourse connectives while a large model performs knowledge-intensive reasoning in parallel; (2) streaming-based cross-modal collaboration, which dynamically overlaps ASR, LLM inference, and TTS to advance the earliest speakable moment; and (3) curriculum-learning-based discourse continuity enhancement, which maintains coherence and logical consistency between early responses and subsequent reasoning outputs. Experiments on two spoken dialogue benchmarks demonstrate that DDTSR reduces response latency by 19%–51% while preserving discourse quality. Further analysis shows that DDTSR functions as a plug-and-play module compatible with diverse LLM backbones, and remains robust across varying utterance lengths, indicating strong practicality and scalability for real-time spoken interaction.
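The dual-track idea behind mechanisms (1) and (2) can be sketched as two concurrent tasks: a fast track that emits a low-commitment connective almost immediately, and a slow track that completes the full reasoning output. The sketch below is a minimal illustration, not the paper's implementation; `small_model_connective` and `large_model_answer` are hypothetical stand-ins (with simulated latencies) for the auxiliary small model and the LLM backbone.

```python
import asyncio

# Hypothetical stand-in for the auxiliary small model: it returns a
# minimally committal discourse connective almost immediately.
async def small_model_connective(user_text: str) -> str:
    await asyncio.sleep(0.05)  # simulated near-instant generation
    return "Well,"

# Hypothetical stand-in for the large model's knowledge-intensive
# reasoning, which takes much longer to complete.
async def large_model_answer(user_text: str) -> str:
    await asyncio.sleep(1.0)  # simulated reasoning latency
    return "the answer involves several considerations."

async def respond(user_text: str) -> list[str]:
    # Launch the slow reasoning track first, then await the fast track:
    # the connective is ready (and could be sent to TTS) long before
    # the full answer, advancing the earliest speakable moment.
    answer_task = asyncio.create_task(large_model_answer(user_text))
    spoken = [await small_model_connective(user_text)]  # speak early
    spoken.append(await answer_task)                    # reasoning lands later
    return spoken

print(asyncio.run(respond("Why is the sky blue?")))
# → ['Well,', 'the answer involves several considerations.']
```

In a real system each track would stream tokens into TTS incrementally; the list here simply makes the ordering of the two tracks visible.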