🤖 AI Summary
This study addresses the limited capability of large audio language models in fundamental auditory perception—such as pitch, loudness, and spatial location—where their performance often approaches random guessing and, unlike humans, shows no advantage on comparative tasks. To systematically evaluate this gap, the authors propose SonicBench, the first psychophysics-based benchmark, which integrates controllable audio generation, a dual-task paradigm of recognition and comparison, linear probing analysis, and controlled experiments. Findings reveal that frozen audio encoders already capture physical auditory cues effectively (accuracy ≥60%), indicating that the primary bottleneck lies not in perceptual encoding but in the subsequent alignment and decoding stages. This work provides the first systematic characterization of the boundaries of audio language models in basic auditory perception.
📝 Abstract
Large Audio Language Models (LALMs) excel at semantic and paralinguistic tasks, yet their ability to perceive the fundamental physical attributes of audio, such as pitch, loudness, and spatial location, remains under-explored. To bridge this gap, we introduce SonicBench, a psychophysically grounded benchmark that systematically evaluates 12 core physical attributes across five perceptual dimensions. Unlike previous datasets, SonicBench uses a controllable generation toolbox to construct stimuli for two complementary paradigms: recognition (absolute judgment) and comparison (relative judgment). This design allows us to probe not only sensory precision but also relational reasoning, a domain where humans typically exhibit greater proficiency. Our evaluation reveals a substantial deficiency in LALMs' foundational auditory understanding: most models perform near random guessing and, contrary to human patterns, fail to show the expected advantage on comparison tasks. Furthermore, explicit reasoning yields minimal gains. Crucially, however, our linear probing analysis demonstrates that frozen audio encoders do successfully capture these physical cues (accuracy of at least 60%), suggesting that the primary bottleneck lies in the alignment and decoding stages, where models fail to leverage the sensory signals they have already captured.
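To make the linear probing idea concrete, the sketch below trains a simple linear classifier on top of fixed feature vectors and measures held-out accuracy. This is an illustrative analogue of the paper's analysis, not its actual pipeline: the embeddings here are synthetic stand-ins for frozen audio-encoder outputs, and the attribute labels (e.g. pitch bins) are simulated.

```python
import numpy as np

# Synthetic stand-in for frozen audio-encoder embeddings: each class
# (e.g. a pitch bin) has a fixed mean direction plus Gaussian noise.
rng = np.random.default_rng(0)
n, d, n_classes = 600, 64, 3
labels = rng.integers(0, n_classes, size=n)
means = rng.normal(0.0, 1.0, size=(n_classes, d))
X = means[labels] + rng.normal(0.0, 1.0, size=(n, d))

# Hold out the last 25% of examples for evaluation.
X_tr, y_tr = X[:450], labels[:450]
X_te, y_te = X[450:], labels[450:]

# Linear probe: ridge regression onto one-hot labels (closed form),
# i.e. the encoder stays frozen and only a linear readout is fit.
Y = np.eye(n_classes)[y_tr]
lam = 1e-2
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ Y)

# Predict by taking the argmax over the linear scores.
pred = np.argmax(X_te @ W, axis=1)
acc = (pred == y_te).mean()
print(f"probe accuracy: {acc:.2f}")  # well above chance (1/3)
```

Accuracy well above chance for a purely linear readout is the kind of evidence the paper uses to argue that the physical cues are already present in the encoder, placing the bottleneck downstream in alignment and decoding.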