🤖 AI Summary
Current large audio-language models (LALMs) exhibit fundamental deficiencies in fine-grained auditory perception (e.g., pitch and duration discrimination) despite their strong high-level semantic capabilities.
Method: We introduce WoW-Bench, the first benchmark explicitly designed to evaluate both low-level acoustic perception and multi-level cognitive reasoning. Built on marine mammal vocalizations, it comprises perceptual classification tasks and cognitive reasoning tasks organized by Bloom's taxonomy, with a novel "distractor" question design that rigorously tests auditory grounding, i.e., whether models rely on genuine acoustic features rather than spurious correlations.
Contribution/Results: WoW-Bench jointly assesses LALMs' auditory perception and cognitive reasoning through fine-grained acoustic analysis and hierarchical question construction. Empirical evaluation shows that state-of-the-art LALMs fall far short of human baselines across all tasks, exposing critical bottlenecks in fine-grained acoustic understanding and cross-modal semantic alignment.
📝 Abstract
Large audio language models (LALMs) extend language understanding into the auditory domain, yet their ability to perform low-level listening, such as pitch and duration detection, remains underexplored. Low-level listening is nonetheless critical for real-world, out-of-distribution tasks where models must reason about unfamiliar sounds based on fine-grained acoustic cues. To address this gap, we introduce the World-of-Whale benchmark (WoW-Bench) to evaluate low-level auditory perception and cognition using marine mammal vocalizations. WoW-Bench is composed of a Perception benchmark for categorizing novel sounds and a Cognition benchmark, inspired by Bloom's taxonomy, to assess the ability to remember, understand, apply, and analyze sound events. For the Cognition benchmark, we additionally introduce distractor questions to evaluate whether models are truly solving problems through listening rather than relying on other heuristics. Experiments with state-of-the-art LALMs show performance far below human levels, indicating a need for stronger auditory grounding in LALMs.
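To make "low-level listening" concrete, the acoustic cues the abstract mentions (pitch and duration) can be computed directly from raw samples. The sketch below is purely illustrative and not part of WoW-Bench; it synthesizes a 440 Hz tone with NumPy, then recovers its duration from the sample count and its pitch from the dominant FFT frequency bin.

```python
import numpy as np

# Illustrative only: estimate the pitch (fundamental frequency) and
# duration of a tone from raw samples. Names and parameters here are
# assumptions for the sketch, not taken from the benchmark.

sr = 16000                          # sample rate in Hz
f0 = 440.0                          # true pitch of the synthetic tone
t = np.arange(int(0.5 * sr)) / sr   # 0.5 s worth of sample times
signal = np.sin(2 * np.pi * f0 * t)

# Duration: number of samples divided by the sample rate.
duration = len(signal) / sr         # 0.5 seconds

# Pitch: frequency of the largest-magnitude spectral bin.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
pitch = freqs[np.argmax(spectrum)]  # ~440 Hz (bin spacing is 2 Hz here)
```

Questions of this kind (which of two clips is higher-pitched, which is longer) are trivial for such signal-level measurements, which is what makes the reported gap between LALMs and human listeners notable.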