🤖 AI Summary
Can audio-language models perform genuinely acoustics-driven deep reasoning? This paper introduces Step-Audio-R1, presented as the first audio-language model capable of cross-type reasoning over speech, environmental sounds, and music. It proposes Modality-Grounded Reasoning Distillation (MGRD), a framework that enforces explicit reliance on raw acoustic representations during inference, via chain-of-thought training and fine-grained audio feature alignment, thereby mitigating hallucination. MGRD is presented as the first method to enable interpretable and verifiable reasoning chains in the audio modality. Experiments show that the model surpasses Gemini 2.5 Pro and matches Gemini 3 Pro across multiple audio understanding and reasoning benchmarks, supporting the generalizability and transferability of acoustics-grounded cross-modal reasoning.
📝 Abstract
Recent advances in reasoning models have demonstrated remarkable success in text and vision domains through extended chain-of-thought deliberation. However, a perplexing phenomenon persists in audio-language models: they consistently perform better with minimal or no reasoning. This raises a fundamental question: can audio intelligence truly benefit from deliberate thinking? We introduce Step-Audio-R1, the first audio reasoning model that successfully unlocks reasoning capabilities in the audio domain. Through our proposed Modality-Grounded Reasoning Distillation (MGRD) framework, Step-Audio-R1 learns to generate audio-relevant reasoning chains that genuinely ground themselves in acoustic features rather than hallucinating disconnected deliberations. Our model exhibits strong audio reasoning capabilities, surpassing Gemini 2.5 Pro and achieving performance comparable to the state-of-the-art Gemini 3 Pro across comprehensive audio understanding and reasoning benchmarks spanning speech, environmental sounds, and music. These results demonstrate that reasoning is a transferable capability across modalities when appropriately anchored, transforming extended deliberation from a liability into a powerful asset for audio intelligence. By establishing the first successful audio reasoning model, Step-Audio-R1 opens new pathways toward building truly multimodal reasoning systems that think deeply across all sensory modalities.