🤖 AI Summary
Large Audio-Language Models (LALMs) exhibit significantly weaker multi-step reasoning capabilities in complex acoustic scenes compared to Large Vision-Language Models (LVLMs), primarily due to the scarcity of large-scale, high-quality audio Chain-of-Thought (CoT) data.
Method: We propose the first audio-oriented cross-modal reasoning distillation framework that transfers CoT reasoning capabilities from LVLMs to LALMs using audio-visual question-answering data. The approach combines test-time scaling to generate audio-focused CoT from the LVLM teacher, audio-grounded verification to filter hallucinated reasoning, and supervised fine-tuning followed by GRPO to mitigate the audio CoT data bottleneck.
Contribution/Results: Experiments demonstrate substantial improvements over strong baselines on audio-visual QA tasks, with higher reasoning accuracy and better cross-scene generalization. To our knowledge, this is the first work to enable verifiable vision-to-audio reasoning transfer, establishing a foundational methodology for scalable audio reasoning.
📝 Abstract
While large audio-language models (LALMs) have demonstrated state-of-the-art audio understanding, their reasoning capability in complex soundscapes still falls behind that of large vision-language models (LVLMs). Compared to the visual domain, one bottleneck is the lack of large-scale chain-of-thought audio data to teach LALMs stepwise reasoning. To circumvent this data and modality gap, we present SightSound-R1, a cross-modal distillation framework that transfers advanced reasoning from a stronger LVLM teacher to a weaker LALM student on the same audio-visual question answering (AVQA) dataset. SightSound-R1 consists of three core steps: (i) test-time scaling to generate audio-focused chains of thought (CoT) from an LVLM teacher, (ii) audio-grounded validation to filter hallucinations, and (iii) a distillation pipeline with supervised fine-tuning (SFT) followed by Group Relative Policy Optimization (GRPO) for the LALM student. Results show that SightSound-R1 improves LALM reasoning performance both on the in-domain AVQA test set and on unseen auditory scenes and questions, outperforming both pretrained and label-only distilled baselines. We conclude that vision reasoning can be effectively transferred to audio models and scaled with abundant audio-visual data.
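To make step (iii) concrete, the group-relative advantage at the heart of GRPO can be sketched as below. This is a minimal illustration, not the paper's implementation: the reward scheme (1.0 for a correct final AVQA answer, 0.0 otherwise) and the group size are hypothetical assumptions for the example.

```python
def group_relative_advantages(rewards, eps=1e-8):
    """Normalize per-rollout rewards within one sampled group.

    GRPO replaces a learned value baseline with group statistics:
        A_i = (r_i - mean(r)) / (std(r) + eps)
    so rollouts are rewarded relative to their siblings for the
    same question, not against an absolute critic.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Hypothetical example: 4 CoT rollouts sampled for one AVQA question,
# scored 1.0 if the final answer matches the label, else 0.0.
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
# Correct rollouts get positive advantage, incorrect ones negative,
# and the advantages are zero-mean within the group.
```

Each advantage then weights the policy-gradient update for its rollout's tokens; the zero-mean property is what lets GRPO drop the separate value network used by PPO-style methods.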