Listen First, Then Answer: Timestamp-Grounded Speech Reasoning

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of existing large audio-language models, which often generate reasoning chains without explicit alignment to the input audio, leading to hallucinations that diverge from the actual spoken content. To mitigate this, the authors propose a reinforcement learning–based approach that introduces, for the first time, a timestamp grounding mechanism to dynamically align each reasoning step with relevant segments of the audio signal. This alignment guides the model to focus on salient regions, enhancing the faithfulness and interpretability of its reasoning while promoting effective region exploration, auditory verification, and logical consistency. Evaluated on four speech benchmark datasets, the proposed method outperforms both zero-shot reasoning and fine-tuned baselines without timestamp grounding, demonstrating significant improvements in reasoning quality and audio-aware comprehension.

📝 Abstract
Large audio-language models (LALMs) can generate reasoning chains for their predictions, but it remains unclear whether these reasoning chains remain grounded in the input audio. In this paper, we propose an RL-based strategy that grounds the reasoning outputs of LALMs with explicit timestamp annotations referring to relevant segments of the audio signal. Our analysis shows that timestamp grounding leads the model to attend more strongly to audio tokens during reasoning generation. Experiments on four speech-based benchmark datasets demonstrate that our approach improves performance compared to both zero-shot reasoning and fine-tuning without timestamp grounding. Additionally, grounding amplifies desirable reasoning behaviors, such as region exploration, auditory verification, and consistency, underscoring the importance of grounding mechanisms for faithful multimodal reasoning.
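The paper does not spell out its reward design, but the idea of rewarding reasoning steps that cite valid audio timestamps can be sketched minimally. The span format `[start-end s]`, the `grounding_reward` function, and the example steps below are all illustrative assumptions, not the authors' implementation:

```python
import re

# Hypothetical timestamp format: each reasoning step may cite an audio
# span like "[3.2-5.8s]". The actual annotation scheme used by the paper
# is not specified here; this regex is an assumption for illustration.
STAMP = re.compile(r"\[(\d+(?:\.\d+)?)-(\d+(?:\.\d+)?)s\]")

def grounding_reward(steps, audio_duration):
    """Return the fraction of reasoning steps citing a valid in-range span.

    A step counts as grounded only if it cites a well-ordered span
    (start < end) that lies inside the audio clip.
    """
    if not steps:
        return 0.0
    grounded = 0
    for step in steps:
        m = STAMP.search(step)
        if m:
            start, end = float(m.group(1)), float(m.group(2))
            if 0.0 <= start < end <= audio_duration:
                grounded += 1
    return grounded / len(steps)

steps = [
    "The speaker pauses noticeably [3.2-5.8s], suggesting hesitation.",
    "Background applause [10.0-12.5s] implies a live setting.",
    "Therefore the tone is informal.",  # ungrounded step, earns no credit
]
print(grounding_reward(steps, audio_duration=30.0))  # 2 of 3 steps grounded
```

In an RL setup, a signal of this shape could be combined with a task-accuracy reward so the policy is pushed toward reasoning chains that stay anchored to specific audio regions rather than free-floating text.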
Problem

Research questions and friction points this paper is trying to address.

audio-language models
reasoning grounding
timestamp annotation
multimodal reasoning
speech reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

timestamp grounding
audio-language models
reinforcement learning
multimodal reasoning
speech reasoning