Beyond Classification: Towards Speech Emotion Reasoning with Multitask AudioLLMs

📅 2025-06-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing audio large language models (AudioLLMs) excel at semantic tasks such as automatic speech recognition but rely on opaque classification modules for paralinguistic cues like emotion, offering little interpretability. This work reframes speech emotion understanding as an interpretable, generative reasoning task, presented as the first such formulation. The proposed paradigm is grounded in a dual-encoder, multitask AudioLLM architecture augmented with reasoning-enhanced supervision and task-alternating training. The model jointly predicts emotion categories and generates natural-language explanations that are semantically coherent, evidence-grounded, and faithful to the input speech. Evaluated on IEMOCAP and MELD, the approach improves emotion classification accuracy while markedly enhancing explanation coherence, faithfulness, and verifiability, addressing fundamental limitations of conventional discriminative paradigms.
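The dual-encoder design is a recurring pattern in recent AudioLLMs: one encoder stream carries semantic content, another carries paralinguistic cues, and both are projected into the language model's embedding space. A minimal PyTorch sketch of that pattern follows; the class name, feature dimensions, and linear projectors are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class DualEncoderFusion(nn.Module):
    """Illustrative fusion of two audio encoder streams (hypothetical
    dimensions; the paper's exact projector design is not shown here)."""

    def __init__(self, d_semantic=1024, d_paralinguistic=768, d_llm=4096):
        super().__init__()
        # One projector per stream maps encoder features into the LLM
        # embedding space so they can be consumed as soft prompt tokens.
        self.proj_semantic = nn.Linear(d_semantic, d_llm)
        self.proj_paralinguistic = nn.Linear(d_paralinguistic, d_llm)

    def forward(self, semantic_feats, paralinguistic_feats):
        # semantic_feats: (batch, T1, d_semantic), e.g. from an ASR encoder
        # paralinguistic_feats: (batch, T2, d_paralinguistic), e.g. from an
        # emotion/prosody encoder
        fused = torch.cat(
            [self.proj_semantic(semantic_feats),
             self.proj_paralinguistic(paralinguistic_feats)],
            dim=1,
        )
        # The fused sequence is prepended to the text-token embeddings
        # before being passed to the LLM decoder.
        return fused
```

Concatenating along the time axis keeps the two streams distinguishable to the LLM while requiring only lightweight projection layers.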

📝 Abstract
Audio Large Language Models (AudioLLMs) have achieved strong results in semantic tasks like speech recognition and translation, but remain limited in modeling paralinguistic cues such as emotion. Existing approaches often treat emotion understanding as a classification problem, offering little insight into the underlying rationale behind predictions. In this work, we explore emotion reasoning, a strategy that leverages the generative capabilities of AudioLLMs to enhance emotion recognition by producing semantically aligned, evidence-grounded explanations. To support this in multitask AudioLLMs, we introduce a unified framework combining reasoning-augmented data supervision, dual-encoder architecture, and task-alternating training. This approach enables AudioLLMs to effectively learn different tasks while incorporating emotional reasoning. Experiments on IEMOCAP and MELD show that our approach not only improves emotion prediction accuracy but also enhances the coherence and evidential grounding of the generated responses.
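Reasoning-augmented data supervision, as described in the abstract, pairs each training utterance not only with an emotion label but also with a natural-language explanation grounded in the speech evidence. A hypothetical example of what such a sample might look like; the field names, path, and explanation text are assumptions for illustration, not the paper's actual data schema.

```python
# Hypothetical reasoning-augmented training sample; the schema and the
# explanation text are illustrative, not taken from the paper's data.
sample = {
    "audio": "clips/example_utterance.wav",  # placeholder path
    "instruction": "What emotion does the speaker convey, and why?",
    "target": (
        "Emotion: frustrated. The speaker's voice is tense and clipped, "
        "the pitch rises on repeated complaints, and the wording "
        "signals irritation."
    ),
}
```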
Problem

Research questions and friction points this paper is trying to address.

Enhancing emotion recognition via generative AudioLLMs
Moving beyond classification to emotion reasoning
Improving prediction accuracy and response coherence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages generative AudioLLMs for emotion reasoning
Unified framework with reasoning-augmented data supervision
Dual-encoder architecture with task-alternating training (see the sketch below)
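Task-alternating training can be read as interleaving optimizer updates across task-specific batches, so the shared model learns emotion reasoning without degrading its semantic tasks. A minimal Python sketch under that reading, assuming a strict one-to-one schedule and a hypothetical model(task=..., **batch) interface that returns a loss:

```python
def train_alternating(model, optimizer, asr_batches, emotion_batches):
    # Alternate one optimizer step per task; the actual mixing ratio and
    # task set in the paper may differ (this schedule is an assumption).
    for asr_batch, emotion_batch in zip(asr_batches, emotion_batches):
        for task, batch in (("asr", asr_batch),
                            ("emotion_reasoning", emotion_batch)):
            loss = model(task=task, **batch)  # hypothetical interface
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```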
Authors

Wenyu Zhang
Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR)
Yingxu He
Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR)
Geyu Lin
Research Engineer, I2R, A*STAR
Generative AI, NLP, Speech
Zhuohan Liu
Research Engineer
Shuo Sun
Johns Hopkins University
Bin Wang
Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR)
Xunlong Zou
Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR)
Jeremy H. M. Wong
Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR)
Qiongqiong Wang
Lead Research Engineer, Institute for Infocomm Research, A*STAR, Singapore
Deep Learning, Artificial Intelligence, Machine Learning
Hardik B. Sailor
Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR)
Nancy F. Chen
ISCA Fellow, AAIA Fellow, Multimodal Generative AI Group Leader, AI for Education Head at A*STAR
Agentic AI, Large Language Models, Conversational AI
AiTi Aw (Aw Ai Ti)