AuditoryBench++: Can Language Models Understand Auditory Knowledge without Hearing?

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) lack an implicit understanding of auditory attributes—such as pitch and loudness—which limits their effectiveness in multimodal interaction. To address this, the authors introduce AuditoryBench++, the first benchmark for evaluating auditory knowledge in text-only settings, spanning both basic comparative tasks and contextually grounded reasoning. They also propose AIR-CoT, an auditory imagination reasoning method that explicitly models and reasons about auditory concepts from pure text inputs—without requiring actual audio signals—through span detection and special-token-guided knowledge injection. AIR-CoT is compatible with both LLMs and multimodal LLMs. Experiments show consistent and significant improvements in auditory understanding and reasoning across multiple models, outperforming off-the-shelf baselines and existing audio-augmented approaches. This work establishes an interpretable and generalizable pathway toward endowing language models with auditory cognition.

📝 Abstract
Even without directly hearing sounds, humans can effortlessly reason about auditory properties, such as pitch, loudness, or sound-source associations, drawing on auditory commonsense. In contrast, language models often lack this capability, limiting their effectiveness in multimodal interactions. As an initial step to address this gap, we present AuditoryBench++, a comprehensive benchmark for evaluating auditory knowledge and reasoning in text-only settings. The benchmark encompasses tasks that range from basic auditory comparisons to contextually grounded reasoning, enabling fine-grained analysis of how models process and integrate auditory concepts. In addition, we introduce AIR-CoT, a novel auditory imagination reasoning method that generates and integrates auditory information during inference through span detection with special tokens and knowledge injection. Extensive experiments with recent LLMs and Multimodal LLMs demonstrate that AIR-CoT generally outperforms both the off-the-shelf models and those augmented with auditory knowledge. The project page is available at https://auditorybenchpp.github.io.
Problem

Research questions and friction points this paper is trying to address.

Evaluating language models' auditory knowledge without direct sound input
Addressing limitations in multimodal interactions due to missing auditory reasoning
Developing methods to enhance auditory imagination and reasoning capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

AuditoryBench++ benchmark for text-only auditory evaluation
AIR-CoT method with span detection and knowledge injection
Auditory imagination reasoning without direct sound input
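The page describes AIR-CoT only at a high level: detect spans that invoke auditory concepts, mark them with special tokens, and inject auditory knowledge at those positions during inference. A minimal illustrative sketch of that two-stage idea follows. Everything here is hypothetical—the token strings (`<imagine>`, `</imagine>`), the lexicon-based detector, and the string-level "injection" are stand-ins; the actual method presumably learns span detection and injects knowledge at the embedding level.

```python
import re

# Hypothetical special tokens marking an auditory span; the tokens actually
# used by AIR-CoT are not specified on this page.
IMAGINE_START, IMAGINE_END = "<imagine>", "</imagine>"

# Toy lexicon of auditory concepts; a real system would learn span detection
# rather than match a fixed word list.
AUDITORY_TERMS = ["pitch", "loudness", "siren", "whisper", "bass"]


def mark_auditory_spans(text: str) -> str:
    """Wrap detected auditory terms in special tokens so a downstream
    model can trigger knowledge injection at those positions."""
    pattern = r"\b(" + "|".join(map(re.escape, AUDITORY_TERMS)) + r")\b"
    return re.sub(
        pattern,
        lambda m: f"{IMAGINE_START}{m.group(1)}{IMAGINE_END}",
        text,
        flags=re.IGNORECASE,
    )


def inject_knowledge(marked: str, knowledge: dict) -> str:
    """Replace each marked span with the span plus a short auditory fact,
    a string-level stand-in for embedding-level knowledge injection."""
    pattern = re.escape(IMAGINE_START) + r"(.*?)" + re.escape(IMAGINE_END)

    def repl(m):
        term = m.group(1)
        fact = knowledge.get(term.lower(), "")
        return f"{term} ({fact})" if fact else term

    return re.sub(pattern, repl, marked)


marked = mark_auditory_spans("A siren is louder than a whisper.")
print(marked)   # spans wrapped in the special tokens

enriched = inject_knowledge(marked, {"siren": "~110 dB", "whisper": "~30 dB"})
print(enriched)  # spans replaced with term plus injected fact
```

The split mirrors the paper's description: span detection decides *where* auditory imagination is needed, and knowledge injection supplies the auditory information at exactly those points before reasoning continues.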
Hyunjong Ok
Pohang University of Science and Technology, South Korea
Suho Yoo
HJ AILAB
Hyeonjun Kim
Korea Military Academy Weapon System Engineering
Jaeho Lee
Pohang University of Science and Technology, South Korea