SonicBench: Dissecting the Physical Perception Bottleneck in Large Audio Language Models

📅 2026-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the weak performance of large audio language models on fundamental auditory perception—attributes such as pitch, loudness, and spatial location—where accuracy often approaches random guessing and, unlike in humans, does not improve on comparative tasks. To evaluate this gap systematically, the authors propose SonicBench, the first psychophysics-based benchmark, which combines controllable audio generation, a dual-task paradigm of identification and comparison, linear probing analysis, and controlled experiments. Findings reveal that frozen audio encoders already capture physical auditory cues effectively (accuracy ≥60%), indicating that the primary bottleneck lies not in perceptual encoding but in the subsequent alignment and decoding stages. This work provides the first systematic characterization of the limits of audio language models in basic auditory perception.

📝 Abstract
Large Audio Language Models (LALMs) excel at semantic and paralinguistic tasks, yet their ability to perceive the fundamental physical attributes of audio, such as pitch, loudness, and spatial location, remains under-explored. To bridge this gap, we introduce SonicBench, a psychophysically grounded benchmark that systematically evaluates 12 core physical attributes across five perceptual dimensions. Unlike previous datasets, SonicBench uses a controllable generation toolbox to construct stimuli for two complementary paradigms: recognition (absolute judgment) and comparison (relative judgment). This design allows us to probe not only sensory precision but also relational reasoning capabilities, a domain where humans typically exhibit greater proficiency. Our evaluation reveals a substantial deficiency in LALMs' foundational auditory understanding; most models perform near random guessing and, contrary to human patterns, fail to show the expected advantage on comparison tasks. Furthermore, explicit reasoning yields minimal gains. However, our linear probing analysis crucially demonstrates that frozen audio encoders do successfully capture these physical cues (accuracy at least 60%), suggesting that the primary bottleneck lies in the alignment and decoding stages, where models fail to leverage the sensory signals they have already captured.
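The linear probing protocol mentioned in the abstract can be sketched as follows: freeze the audio encoder, extract one embedding per clip, and train only a linear classifier to predict a physical attribute (e.g., a pitch bin). The real SonicBench encoder and stimuli are not reproduced here; synthetic embeddings with an injected class-dependent direction stand in for frozen-encoder features, so only the probing mechanics are illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim, n_classes = 2000, 128, 4  # clips, embedding size, attribute bins (illustrative)

# Simulated frozen-encoder embeddings: Gaussian noise plus a class-dependent
# direction, mimicking an encoder that linearly encodes the attribute.
labels = rng.integers(0, n_classes, size=n)
directions = rng.normal(size=(n_classes, dim))
X = rng.normal(size=(n, dim)) + directions[labels]

# Simple train/test split.
split = int(0.8 * n)
X_tr, X_te = X[:split], X[split:]
y_tr, y_te = labels[:split], labels[split:]

# The probe itself: a single linear map fit by least squares onto one-hot
# class targets (equivalent to one linear layer; the encoder stays frozen).
Y_onehot = np.eye(n_classes)[y_tr]
W, *_ = np.linalg.lstsq(X_tr, Y_onehot, rcond=None)
pred = (X_te @ W).argmax(axis=1)
acc = (pred == y_te).mean()
print(f"linear-probe accuracy: {acc:.2f}")
```

If the probe's accuracy is well above chance (here, 25% for four classes), the attribute is linearly decodable from the frozen embeddings; this is the sense in which the paper locates the bottleneck downstream of the encoder.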
Problem

Research questions and friction points this paper is trying to address.

Large Audio Language Models
physical perception
auditory understanding
sensory signals
perceptual dimensions
Innovation

Methods, ideas, or system contributions that make the work stand out.

SonicBench
Large Audio Language Models
physical audio perception
psychophysical benchmark
linear probing
Yirong Sun
Ningbo Key Laboratory of Spatial Intelligence and Digital Derivative, Institute of Digital Twin, EIT
Yanjun Chen
University of Illinois Urbana-Champaign
Human-Computer Interaction, Haptics
Xin Qiu
Cognizant AI Labs
Neural Architecture Search, Uncertainty Quantification, Evolutionary Computation
Gang Zhang
Tsinghua University
computer vision
Hongyu Chen
Ningbo Key Laboratory of Spatial Intelligence and Digital Derivative, Institute of Digital Twin, EIT
Daokuan Wu
Ningbo Key Laboratory of Spatial Intelligence and Digital Derivative, Institute of Digital Twin, EIT
Chengming Li
Shenzhen MSU-BIT University
Min Yang
Bytedance
Vision Language Model, Computer Vision, Video Understanding
Dawei Zhu
Amazon AGI
Wei Zhang
College of Information Science and Technology, Eastern Institute of Technology, Ningbo, China
reinforcement learning, motion planning, humanoid robot, intelligent fault diagnosis
Xiaoyu Shen
Eastern Institute of Technology, Ningbo
language model, multi-modal learning, reasoning