Do Audio LLMs Really LISTEN, or Just Transcribe? Measuring Lexical vs. Acoustic Emotion Cues Reliance

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the reliance mechanisms of Large Audio-Language Models (LALMs) on lexical content versus acoustic cues for emotion understanding. Addressing the open question of whether LALMs genuinely “listen” to speech, we introduce LISTEN—the first controlled-variable benchmark—featuring a speech narrative dataset with systematically aligned or conflicting lexical and paralinguistic (e.g., prosody, rhythm) cues, coupled with a multimodal emotion recognition evaluation protocol. Systematic evaluation across six state-of-the-art LALMs reveals that emotion recognition is overwhelmingly driven by text transcriptions, with near-random performance when lexical content is absent or when acoustic and semantic cues conflict. Our work provides the first empirical evidence that current LALMs fundamentally “transcribe rather than listen,” exposing a critical limitation in audio understanding. LISTEN establishes a foundational benchmark and theoretical grounding for developing trustworthy, acoustically grounded audio-language models.

📝 Abstract
Understanding emotion from speech requires sensitivity to both lexical and acoustic cues. However, it remains unclear whether large audio language models (LALMs) genuinely process acoustic information or rely primarily on lexical content. We present LISTEN (Lexical vs. Acoustic Speech Test for Emotion in Narratives), a controlled benchmark designed to disentangle lexical reliance from acoustic sensitivity in emotion understanding. Across evaluations of six state-of-the-art LALMs, we observe a consistent lexical dominance. Models predict "neutral" when lexical cues are neutral or absent, show limited gains under cue alignment, and fail to classify distinct emotions under cue conflict. In paralinguistic settings, performance approaches chance. These results indicate that current LALMs largely "transcribe" rather than "listen," relying heavily on lexical semantics while underutilizing acoustic cues. LISTEN offers a principled framework for assessing emotion understanding in multimodal models.
Problem

Research questions and friction points this paper is trying to address.

Evaluating audio LLMs' reliance on lexical versus acoustic emotion cues
Developing a benchmark to test acoustic sensitivity in emotion understanding
Revealing current models' lexical dominance and acoustic underutilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark disentangles lexical and acoustic emotion cues
Models show lexical dominance over acoustic processing
Framework assesses multimodal emotion understanding capabilities
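The cue-conflict protocol described above can be sketched in a few lines: given samples whose transcript and prosody signal different emotions, count how often a model's prediction tracks each cue. This is a minimal illustrative sketch; the `Sample` schema, `cue_reliance` function, and toy labels are assumptions for illustration, not the paper's actual data format or evaluation code.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    lexical_emotion: str   # emotion implied by the transcript
    acoustic_emotion: str  # emotion carried by prosody/rhythm
    prediction: str        # the model's predicted emotion

def cue_reliance(samples):
    """Among cue-conflict samples (lexical != acoustic), measure how often
    the prediction matches each cue. A lexically dominant model tracks the
    transcript even when the prosody disagrees."""
    conflicts = [s for s in samples if s.lexical_emotion != s.acoustic_emotion]
    if not conflicts:
        return {"lexical": 0.0, "acoustic": 0.0}
    lex = sum(s.prediction == s.lexical_emotion for s in conflicts) / len(conflicts)
    aco = sum(s.prediction == s.acoustic_emotion for s in conflicts) / len(conflicts)
    return {"lexical": lex, "acoustic": aco}

samples = [
    Sample("sad", "happy", "sad"),       # follows the transcript
    Sample("angry", "neutral", "angry"), # follows the transcript
    Sample("happy", "sad", "sad"),       # follows the prosody
    Sample("fear", "fear", "fear"),      # aligned cues: excluded from conflict set
]
rates = cue_reliance(samples)
# on the three conflict samples: lexical match 2/3, acoustic match 1/3
```

A strongly lexical-dominant model, as the paper reports for current LALMs, would show a high lexical rate and a near-chance acoustic rate under this kind of tally.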
Jingyi Chen
Department of Linguistics, The Ohio State University, USA
Zhimeng Guo
Department of Information Sciences and Technology, Penn State University, USA
Jiyun Chun
Department of Computer Science and Engineering, The Ohio State University, USA
Pichao Wang
Amazon, USA
Andrew Perrault
Assistant Professor, Dept. of Computer Science and Engineering
Artificial Intelligence, Game Theory, Machine Learning, Optimization
Micha Elsner
Assistant Professor of Linguistics, The Ohio State University
Computational linguistics, Bayesian methods, discourse structure, language acquisition