WoW-Bench: Evaluating Fine-Grained Acoustic Perception in Audio-Language Models via Marine Mammal Vocalizations

📅 2025-08-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current large audio-language models (LALMs) exhibit fundamental deficiencies in fine-grained auditory perception—e.g., pitch and duration discrimination—despite their strong high-level semantic capabilities. Method: We introduce WoW-Bench, the first benchmark explicitly designed to evaluate both low-level acoustic perception and multi-level cognitive reasoning. Built upon marine mammal vocalizations, it comprises perceptual classification tasks and Bloom’s taxonomy–informed cognitive reasoning tasks, featuring a novel “distractor” question design to rigorously test true auditory grounding—i.e., whether models rely on genuine acoustic features rather than spurious correlations. Contribution/Results: WoW-Bench enables the first dual-dimensional assessment of LALMs’ auditory perception and cognitive reasoning capacities via fine-grained acoustic analysis and hierarchical question construction. Empirical evaluation shows that state-of-the-art LALMs underperform significantly relative to human baselines across all tasks, exposing critical bottlenecks in fine-grained acoustic understanding and cross-modal semantic alignment.

📝 Abstract
Large audio-language models (LALMs) extend language understanding into the auditory domain, yet their ability to perform low-level listening, such as pitch and duration detection, remains underexplored. However, low-level listening is critical for real-world, out-of-distribution tasks where models must reason about unfamiliar sounds based on fine-grained acoustic cues. To address this gap, we introduce the World-of-Whale benchmark (WoW-Bench) to evaluate low-level auditory perception and cognition using marine mammal vocalizations. WoW-Bench is composed of a Perception benchmark for categorizing novel sounds and a Cognition benchmark, inspired by Bloom's taxonomy, to assess the ability to remember, understand, apply, and analyze sound events. For the Cognition benchmark, we additionally introduce distractor questions to evaluate whether models are truly solving problems through listening rather than relying on other heuristics. Experiments with state-of-the-art LALMs show performance far below human levels, indicating a need for stronger auditory grounding in LALMs.
Problem

Research questions and friction points this paper is trying to address.

Evaluating fine-grained acoustic perception (e.g., pitch and duration discrimination) in audio-language models
Assessing low-level auditory capabilities using marine mammal vocalizations as out-of-distribution stimuli
Testing whether models reason about unfamiliar sounds by genuinely listening rather than exploiting heuristics
Innovation

Methods, ideas, or system contributions that make the work stand out.

A Perception benchmark built on marine mammal vocalizations for categorizing novel sounds
A Cognition benchmark, structured by Bloom's taxonomy, with a distractor-question design to verify auditory grounding
A dual assessment of low-level auditory perception and multi-level cognitive reasoning in LALMs