LTS-VoiceAgent: A Listen-Think-Speak Framework for Efficient Streaming Voice Interaction via Semantic Triggering and Incremental Reasoning

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing spoken interaction systems struggle to simultaneously achieve low latency and semantic coherence, constrained either by the high latency of cascaded architectures or the limited reasoning capabilities of end-to-end models. This work proposes LTS-VoiceAgent, a novel framework that decouples “when to think” from “how to reason incrementally.” It introduces a dynamic semantic trigger to identify meaningful speech prefixes and employs a dual-role streaming coordination mechanism—comprising a background Thinker and a foreground Speaker—to enable concurrent listening, reasoning, and speaking. This design effectively mitigates semantic fragmentation and redundant computation, facilitating fluent, streaming interactions. Evaluated on VERA, Spoken-MQA, BigBenchAudio, and a newly curated Pause-and-Repair benchmark, LTS-VoiceAgent significantly outperforms existing cascaded and streaming approaches, achieving a superior trade-off among accuracy, latency, and computational efficiency.

📝 Abstract
Real-time voice agents face a dilemma: end-to-end models often lack deep reasoning, while cascaded pipelines incur high latency by executing ASR, LLM reasoning, and TTS strictly in sequence, unlike human conversation, where listeners often start thinking before the speaker finishes. Since cascaded architectures remain the dominant choice for complex tasks, existing cascaded streaming strategies attempt to reduce this latency via mechanical segmentation (e.g., fixed chunks, VAD-based splitting) or speculative generation, but they frequently either break semantic units or waste computation on predictions that must be rolled back. To address these challenges, we propose LTS-VoiceAgent, a Listen-Think-Speak framework that explicitly separates when to think from how to reason incrementally. It features a Dynamic Semantic Trigger to detect meaningful prefixes, and a Dual-Role Stream Orchestrator that coordinates a background Thinker (for state maintenance) and a foreground Speaker (for speculative solving). This parallel design enables "thinking while speaking" without blocking responses. We also introduce a Pause-and-Repair benchmark containing natural disfluencies to stress-test streaming robustness. Experiments across VERA, Spoken-MQA, BigBenchAudio, and our benchmark show that LTS-VoiceAgent achieves a stronger accuracy-latency-efficiency trade-off than serial cascaded baselines and existing streaming strategies.
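The coordination the abstract describes can be illustrated with a toy sketch: a background Thinker consumes semantically meaningful prefixes as they are detected, while the foreground Speaker answers from whatever state has accumulated. This is a minimal sketch under stated assumptions, not the paper's implementation: the cue-word trigger, the `DualRoleOrchestrator` class, and its string-valued "thoughts" are hypothetical stand-ins for the learned Dynamic Semantic Trigger and the LLM-based Thinker/Speaker roles.

```python
import queue
import threading

# Hypothetical trigger: fire when a prefix ends in a clause-final cue word.
# A stand-in for the paper's learned Dynamic Semantic Trigger.
CUE_WORDS = {"then", "so", "therefore", "?"}

def is_meaningful_prefix(tokens):
    return len(tokens) >= 3 and tokens[-1].rstrip(",.") in CUE_WORDS

class DualRoleOrchestrator:
    """Toy dual-role coordination: background Thinker, foreground Speaker."""

    def __init__(self):
        self.state = []                 # reasoning state maintained by Thinker
        self.prefixes = queue.Queue()   # triggered prefixes awaiting reasoning
        self.spoken = []
        self._thinker = threading.Thread(target=self._think, daemon=True)
        self._thinker.start()

    def _think(self):
        # Background Thinker: incrementally extend state per triggered prefix.
        while True:
            prefix = self.prefixes.get()
            if prefix is None:          # sentinel: end of input stream
                break
            self.state.append(f"thought({' '.join(prefix)})")

    def listen(self, token_stream):
        # Foreground loop: listen token by token; hand off prefixes to the
        # Thinker as soon as the semantic trigger fires (listen-while-think).
        tokens = []
        for tok in token_stream:
            tokens.append(tok)
            if is_meaningful_prefix(tokens):
                self.prefixes.put(list(tokens))
        self.prefixes.put(None)
        self._thinker.join()
        # Speaker: respond from the state the Thinker has built so far.
        reply = f"answer based on {len(self.state)} partial thought(s)"
        self.spoken.append(reply)
        return reply
```

In this sketch the Speaker only replies after the stream ends; in the real system it would run concurrently, speculatively solving while the Thinker keeps updating state.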
Problem

Research questions and friction points this paper is trying to address.

streaming voice interaction
latency
semantic units
cascaded pipeline
real-time reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Listen-Think-Speak
Semantic Triggering
Incremental Reasoning
Streaming Voice Agent
Dual-Role Orchestrator
Wenhao Zou
Meituan, University of Chinese Academy of Sciences
Yuwei Miao
PhD student, University of Texas at Arlington
Zhanyu Ma
Beijing University of Posts and Telecommunications
Pattern Recognition, Machine Learning, Computer Vision, Multimedia Technology, Deep Learning
Jun Xu
Meituan
Jiuchong Gao
Meituan
Jinghua Hao
Meituan
Renqing He
Meituan
Jingwen Xu
Meituan