🤖 AI Summary
Large language models (LLMs) lack an implicit understanding of auditory attributes, such as pitch and loudness, limiting their effectiveness in multimodal interaction. To address this, we introduce AuditoryBench++, the first benchmark for evaluating auditory knowledge in text-only settings, encompassing both basic comparative and contextually grounded reasoning tasks. We also propose AIR-CoT, a novel auditory imagination reasoning method that explicitly models and reasons about auditory concepts from pure text inputs, without requiring actual audio signals, through span detection with special tokens and knowledge injection. AIR-CoT is compatible with both LLMs and multimodal LLMs. Experiments demonstrate consistent and significant improvements in auditory understanding and reasoning across multiple models, outperforming standard baselines and existing audio-augmented approaches. This work establishes an interpretable and generalizable pathway toward endowing language models with auditory cognition.
📝 Abstract
Even without directly hearing sounds, humans can effortlessly reason about auditory properties, such as pitch, loudness, or sound-source associations, drawing on auditory commonsense. In contrast, language models often lack this capability, limiting their effectiveness in multimodal interactions. As an initial step to address this gap, we present AuditoryBench++, a comprehensive benchmark for evaluating auditory knowledge and reasoning in text-only settings. The benchmark encompasses tasks that range from basic auditory comparisons to contextually grounded reasoning, enabling fine-grained analysis of how models process and integrate auditory concepts. In addition, we introduce AIR-CoT, a novel auditory imagination reasoning method that generates and integrates auditory information during inference through span detection with special tokens and knowledge injection. Extensive experiments with recent LLMs and Multimodal LLMs demonstrate that AIR-CoT generally outperforms both the off-the-shelf models and those augmented with auditory knowledge. The project page is available at https://auditorybenchpp.github.io.
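To make the two-stage idea concrete, here is a minimal, purely illustrative sketch of the pipeline the abstract describes: detect auditory spans in a text prompt, mark them with special tokens, then inject auditory knowledge at those spans before reasoning. Everything here is an assumption for illustration: the `<aud>` token name, the keyword-lookup "detector", and the string-valued knowledge base are hypothetical stand-ins; the actual AIR-CoT method injects learned representations inside a model, not strings.

```python
import re

# Hypothetical auditory knowledge base (illustrative placeholder values);
# AIR-CoT itself injects learned embeddings, not text snippets.
AUDITORY_KB = {
    "piccolo": "high pitch, roughly 600-4000 Hz",
    "tuba": "low pitch, roughly 30-350 Hz",
}

def mark_auditory_spans(text: str, kb: dict) -> str:
    """Span detection stand-in: wrap known sound-source mentions
    with a special token pair <aud>...</aud>."""
    for term in kb:
        text = re.sub(rf"\b{re.escape(term)}\b",
                      lambda m: f"<aud>{m.group(0)}</aud>",
                      text, flags=re.IGNORECASE)
    return text

def inject_knowledge(marked: str, kb: dict) -> str:
    """Knowledge injection stand-in: expand each special-token span
    with the associated auditory information."""
    def repl(m):
        term = m.group(1)
        return f"{term} [{kb[term.lower()]}]"
    return re.sub(r"<aud>(.*?)</aud>", repl, marked)

prompt = "Which produces a higher pitch, a piccolo or a tuba?"
marked = mark_auditory_spans(prompt, AUDITORY_KB)
augmented = inject_knowledge(marked, AUDITORY_KB)
print(augmented)
```

In the sketch, the augmented prompt carries the injected auditory facts inline, so a downstream text-only model can reason over them without ever receiving audio.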