🤖 AI Summary
This study investigates whether Large Audio-Language Models (LALMs) rely on lexical content or on acoustic cues for emotion understanding. Addressing the open question of whether LALMs genuinely "listen" to speech, we introduce LISTEN, the first controlled-variable benchmark of its kind: a speech narrative dataset in which lexical and paralinguistic cues (e.g., prosody, rhythm) are systematically aligned or placed in conflict, together with a multimodal emotion recognition evaluation protocol. Systematic evaluation of six state-of-the-art LALMs reveals that emotion recognition is overwhelmingly driven by text transcriptions, with near-random performance when lexical content is absent or when acoustic and semantic cues conflict. Our work provides the first empirical evidence that current LALMs fundamentally "transcribe rather than listen," exposing a critical limitation in audio understanding. LISTEN establishes a foundational benchmark and a principled basis for developing trustworthy, acoustically grounded audio-language models.
📝 Abstract
Understanding emotion from speech requires sensitivity to both lexical and acoustic cues. However, it remains unclear whether large audio-language models (LALMs) genuinely process acoustic information or rely primarily on lexical content. We present LISTEN (Lexical vs. Acoustic Speech Test for Emotion in Narratives), a controlled benchmark designed to disentangle lexical reliance from acoustic sensitivity in emotion understanding. Across evaluations of six state-of-the-art LALMs, we observe consistent lexical dominance. Models predict "neutral" when lexical cues are neutral or absent, show limited gains under cue alignment, and fail to classify distinct emotions under cue conflict. In paralinguistic settings, performance approaches chance. These results indicate that current LALMs largely "transcribe" rather than "listen," relying heavily on lexical semantics while underutilizing acoustic cues. LISTEN offers a principled framework for assessing emotion understanding in multimodal models.
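To make the controlled-variable design concrete, the sketch below shows one way a LISTEN-style evaluation could score a model separately under aligned, conflicting, neutral-lexical, and paralinguistic-only conditions, so lexical dominance appears as a gap between conditions. This is a minimal illustrative sketch, not the authors' released code: the item fields, condition names, and the `predict` callable are assumptions for illustration.

```python
# Hypothetical per-condition scoring loop for a LISTEN-style evaluation.
# Field names, condition labels, and the toy predictor are illustrative
# assumptions, not the benchmark's actual data format or protocol.
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Item:
    audio_path: str            # spoken narrative clip
    transcript: Optional[str]  # None in paralinguistic-only settings
    condition: str             # e.g. "aligned", "conflict", "neutral_lexical", "paralinguistic_only"
    label: str                 # emotion conveyed by the acoustic channel


def score_by_condition(items: list[Item],
                       predict: Callable[[Item], str]) -> dict[str, float]:
    """Return accuracy per cue condition."""
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for item in items:
        total[item.condition] += 1
        if predict(item) == item.label:
            correct[item.condition] += 1
    return {cond: correct[cond] / total[cond] for cond in total}


if __name__ == "__main__":
    # Toy usage: a text-only "model" that ignores audio entirely, which
    # would look fine under alignment but fail under conflict and
    # paralinguistic-only conditions.
    toy_items = [
        Item("a.wav", "I can't believe I won!", "aligned", "happy"),
        Item("b.wav", "I can't believe I won!", "conflict", "sad"),
        Item("c.wav", None, "paralinguistic_only", "angry"),
    ]
    lexical_only = lambda it: "happy" if it.transcript else "neutral"
    print(score_by_condition(toy_items, lexical_only))
```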