🤖 AI Summary
It remains unclear whether existing audio multimodal large language models genuinely leverage acoustic signals. To address this question, this work introduces the DEAF benchmark, comprising over 2,700 samples in which acoustic cues (emotional prosody, background sounds, and speaker identity) conflict with textual semantics. We propose a multi-level evaluation framework that disentangles content-driven bias from prompt-induced behavior by exploiting semantic conflicts and misleading prompts, enabling the first systematic separation of model reliance on textual versus acoustic cues. Through a controlled multi-tiered conflict paradigm and a novel acoustic-fidelity metric, our evaluation of seven leading models reveals a pervasive "text dominance" phenomenon: despite sensitivity to acoustic variations, predictions remain predominantly driven by text, exposing a significant gap between high performance on standard speech benchmarks and genuine acoustic understanding.
📝 Abstract
Recent Audio Multimodal Large Language Models (Audio MLLMs) demonstrate impressive performance on speech benchmarks, yet it remains unclear whether these models genuinely process acoustic signals or rely on text-based semantic inference. To study this question systematically, we introduce DEAF (Diagnostic Evaluation of Acoustic Faithfulness), a benchmark of over 2,700 conflict stimuli spanning three acoustic dimensions: emotional prosody, background sounds, and speaker identity. Building on DEAF, we design a controlled multi-level evaluation framework that progressively increases textual influence, ranging from semantic conflicts in the spoken content to misleading prompts and their combination, allowing us to disentangle content-driven bias from prompt-induced sycophancy. We further introduce diagnostic metrics to quantify model reliance on textual cues over acoustic signals. Our evaluation of seven Audio MLLMs reveals a consistent pattern of text dominance: models are sensitive to acoustic variations, yet their predictions are predominantly driven by textual inputs, exposing a gap between high performance on standard speech benchmarks and genuine acoustic understanding.
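The abstract does not spell out the diagnostic metrics themselves. As a rough illustration of the kind of measurement such an evaluation could involve, the sketch below computes, on conflict stimuli where the text-implied label and the audio-implied label disagree, how often a model's prediction follows each cue and reports the difference as a dominance score. The names (`ConflictSample`, `text_dominance`) and the exact formula are assumptions for illustration only, not the paper's actual DEAF metrics.

```python
from dataclasses import dataclass

@dataclass
class ConflictSample:
    """One conflict stimulus: the label implied by the text vs. the audio."""
    text_label: str   # label suggested by the (conflicting) textual cue
    audio_label: str  # label supported by the acoustic signal
    prediction: str   # model output for this sample

def text_dominance(samples: list[ConflictSample]) -> dict[str, float]:
    """Measure how often predictions follow the textual cue vs. the acoustic cue
    on samples where the two deliberately disagree (hypothetical metric)."""
    conflicts = [s for s in samples if s.text_label != s.audio_label]
    if not conflicts:
        return {"text_follow_rate": 0.0, "audio_follow_rate": 0.0, "dominance": 0.0}
    n = len(conflicts)
    text_hits = sum(s.prediction == s.text_label for s in conflicts)
    audio_hits = sum(s.prediction == s.audio_label for s in conflicts)
    return {
        "text_follow_rate": text_hits / n,
        "audio_follow_rate": audio_hits / n,
        # positive values indicate reliance on textual over acoustic cues
        "dominance": (text_hits - audio_hits) / n,
    }

if __name__ == "__main__":
    demo = [
        ConflictSample("happy", "angry", "happy"),
        ConflictSample("calm", "fearful", "fearful"),
        ConflictSample("sad", "happy", "sad"),
    ]
    print(text_dominance(demo))
```

Under this reading, a strongly positive dominance score on conflict stimuli, combined with high accuracy on standard (non-conflict) benchmarks, would correspond to the text-dominance gap the authors describe.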