SHANKS: Simultaneous Hearing and Thinking for Spoken Language Models

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current spoken language models (SLMs) and large language models (LLMs) require the complete audio input before they begin inference, which results in high latency, no support for real-time interruption, and no tool invocation before the user finishes speaking, hindering low-latency spoken interaction. This paper proposes the first continuous-inference framework for "think-while-listening" spoken interaction, featuring chunked streaming audio input, unspoken chain-of-thought generation, context-aware dynamic interruption decisions, and immediate tool invocation, mimicking human real-time cognition. On mathematical problem solving, interruption accuracy improves by 37.1%; in tool-augmented dialogue, 56.9% of tool calls are completed before the user's speech ends. This work achieves, for the first time, reliable reasoning and action execution over incomplete speech input, significantly improving the real-time responsiveness and cognitive intelligence of spoken interaction.

📝 Abstract
Current large language models (LLMs) and spoken language models (SLMs) begin thinking and taking actions only after the user has finished their turn. This prevents the model from interacting during the user's turn and can lead to high response latency while it waits to think. Consequently, thinking after receiving the full input is not suitable for speech-to-speech interaction, where real-time, low-latency exchange is important. We address this by noting that humans naturally "think while listening." In this paper, we propose SHANKS, a general inference framework that enables SLMs to generate unspoken chain-of-thought reasoning while listening to the user input. SHANKS streams the input speech in fixed-duration chunks and, as soon as a chunk is received, generates unspoken reasoning based on all previous speech and reasoning, while the user continues speaking. SHANKS uses this unspoken reasoning to decide whether to interrupt the user and to make tool calls to complete the task. We demonstrate that SHANKS enhances real-time user-SLM interaction in two scenarios: (1) when the user is presenting a step-by-step solution to a math problem, SHANKS can listen, reason, and interrupt when the user makes a mistake, achieving 37.1% higher interruption accuracy than a baseline that interrupts without thinking; and (2) in a tool-augmented dialogue, SHANKS can complete 56.9% of the tool calls before the user finishes their turn. Overall, SHANKS moves toward models that keep thinking throughout the conversation, not only after a turn ends. Animated illustrations of Shanks can be found at https://d223302.github.io/SHANKS/
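The abstract describes the inference loop only at a high level; the sketch below restates it in Python under assumed interfaces. The chunk duration, the `audio_stream`, `slm`, and `tools` objects, and every method name (`generate_unspoken_reasoning`, `decide_action`, `speak`, `compose_response`) are hypothetical placeholders rather than a released API; only the structure mirrors the described behavior: receive a fixed-duration chunk, generate unspoken reasoning over all prior speech and reasoning, then either call a tool, interrupt the user, or keep listening.

```python
# Minimal sketch of a SHANKS-style "think while listening" loop.
# All object interfaces and the chunk duration below are illustrative
# assumptions, not the paper's actual implementation.

CHUNK_SECONDS = 2.0  # fixed-duration speech chunks (assumed value)

def shanks_listen_and_think(audio_stream, slm, tools):
    """Stream user speech in fixed-duration chunks and reason after each one."""
    speech_so_far = []      # all audio chunks received so far
    reasoning_so_far = []   # unspoken chain-of-thought accumulated so far

    for chunk in audio_stream.iter_chunks(seconds=CHUNK_SECONDS):
        speech_so_far.append(chunk)

        # Generate unspoken reasoning conditioned on all previous speech
        # and all previous reasoning, while the user continues speaking.
        thought = slm.generate_unspoken_reasoning(
            speech=speech_so_far,
            prior_reasoning=reasoning_so_far,
        )
        reasoning_so_far.append(thought)

        # Use the latest reasoning to decide what to do next.
        action = slm.decide_action(thought)
        if action.kind == "tool_call":
            # e.g., query a backend before the user's turn ends
            result = tools.call(action.name, **action.arguments)
            reasoning_so_far.append(f"[tool result] {result}")
        elif action.kind == "interrupt":
            # e.g., the user made a mistake in a math derivation
            return slm.speak(action.message)
        # otherwise: keep listening silently

    # User finished the turn; respond using the accumulated reasoning.
    return slm.speak(slm.compose_response(speech_so_far, reasoning_so_far))
```

Because each chunk's reasoning conditions on everything heard and thought so far, the decision to interrupt or to fire a tool call can be made as soon as enough evidence has accumulated, rather than only after the turn ends.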
Problem

Research questions and friction points this paper is trying to address.

How to enable real-time thinking during user speech input
How to reduce response latency in spoken language interactions
How to generate unspoken reasoning while listening to speech
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enables thinking while listening to user speech
Generates unspoken reasoning during speech input
Interrupts users and calls tools in real time
Authors
Cheng-Han Chiang (National Taiwan University; Microsoft)
Xiaofei Wang (Microsoft)
Linjie Li (Microsoft; Vision and Language)
Chung-Ching Lin (Microsoft)
Kevin Lin (Microsoft)
Shujie Liu (Microsoft)
Zhendong Wang (Microsoft)
Zhengyuan Yang (Principal Researcher, Microsoft; Computer Vision, Multimedia, Multimodal, Post-Training, Agentic RL)
Hung-yi Lee (National Taiwan University; deep learning, spoken language understanding, speech processing)
Lijuan Wang (Microsoft)