🤖 AI Summary
Existing audio large language models (AudioLLMs) excel at semantic tasks such as automatic speech recognition but rely on opaque classification modules for paralinguistic cues like emotion, offering little interpretability. This work reframes speech emotion understanding as an interpretable, generative reasoning task, marking the first such formulation. We propose a novel paradigm grounded in a dual-encoder, multi-task AudioLLM architecture, augmented with reasoning-enhanced supervision and task-alternating training. The model jointly predicts emotion categories and generates natural-language explanations that are semantically coherent, evidence-grounded, and faithful to the input speech. Evaluated on IEMOCAP and MELD, the approach improves emotion classification accuracy while significantly enhancing explanation coherence, faithfulness, and verifiability, thereby overcoming fundamental limitations of conventional discriminative paradigms.
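
As a rough illustration of the dual-encoder, multi-task design described above, the sketch below wires two placeholder audio encoders (one for semantic content, one for paralinguistic cues) into a shared decoder that generates the emotion label together with its explanation as a single token sequence. All names and module choices here (`DualEncoderAudioLLM`, the GRU encoders, the toy Transformer decoder layer, the dimensions) are illustrative assumptions, not the paper's actual components.

```python
# Minimal sketch (assumptions throughout): two audio encoders -- one tuned for
# semantic content, one for paralinguistic/acoustic cues -- feed a shared
# generative decoder that emits "<emotion label> because <evidence>" tokens.
import torch
import torch.nn as nn


class DualEncoderAudioLLM(nn.Module):
    def __init__(self, d_model=256, vocab_size=1000):
        super().__init__()
        # Placeholder encoders; in practice these would be pretrained speech
        # encoders (e.g., one semantic, one acoustic/prosodic).
        self.semantic_encoder = nn.GRU(80, d_model, batch_first=True)
        self.acoustic_encoder = nn.GRU(80, d_model, batch_first=True)
        # Project the concatenated audio streams into the decoder's embedding space.
        self.projector = nn.Linear(2 * d_model, d_model)
        # Stand-in for the LLM backbone that generates label + rationale.
        self.decoder = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.embed = nn.Embedding(vocab_size, d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, mel, text_tokens):
        sem, _ = self.semantic_encoder(mel)      # (B, T, d) semantic stream
        aco, _ = self.acoustic_encoder(mel)      # (B, T, d) paralinguistic stream
        audio_ctx = self.projector(torch.cat([sem, aco], dim=-1))
        hidden = self.decoder(self.embed(text_tokens), audio_ctx)
        return self.lm_head(hidden)              # next-token logits over label + rationale


model = DualEncoderAudioLLM()
mel = torch.randn(2, 120, 80)                    # toy log-mel features
tokens = torch.randint(0, 1000, (2, 16))         # toy target: label + explanation tokens
logits = model(mel, tokens)                      # (2, 16, 1000)
```

Generating the label and the rationale as one sequence is what lets the same generative head serve both prediction and explanation, rather than routing emotion through a separate classification module.
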
📝 Abstract
Audio Large Language Models (AudioLLMs) have achieved strong results in semantic tasks like speech recognition and translation, but remain limited in modeling paralinguistic cues such as emotion. Existing approaches often treat emotion understanding as a classification problem, offering little insight into the rationale behind predictions. In this work, we explore emotion reasoning, a strategy that leverages the generative capabilities of AudioLLMs to enhance emotion recognition by producing semantically aligned, evidence-grounded explanations. To support this in multi-task AudioLLMs, we introduce a unified framework combining reasoning-augmented data supervision, a dual-encoder architecture, and task-alternating training. This approach enables AudioLLMs to learn different tasks effectively while incorporating emotional reasoning. Experiments on IEMOCAP and MELD show that our approach not only improves emotion prediction accuracy but also enhances the coherence and evidential grounding of the generated responses.
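
To make the task-alternating training concrete, here is a minimal sketch in which each optimization step draws a batch from a different supervision source, alternating between a semantic (ASR-style) task and the emotion-reasoning task. The round-robin schedule, dummy batches, and loss below are assumptions for illustration (they reuse the `DualEncoderAudioLLM` sketch above), not the paper's exact training recipe.

```python
# Hedged sketch of task-alternating training: semantic and emotion-reasoning
# batches are interleaved step by step rather than mixed into a single loss.
# Reuses the DualEncoderAudioLLM sketch above; all data here is a placeholder.
import itertools
import torch
import torch.nn.functional as F


def sample_batch(task, batch_size=2):
    # Dummy batch: log-mel features plus target tokens. For "semantic" the
    # targets would be transcript tokens; for "emotion_reasoning" they would be
    # the emotion label followed by its rationale. Here both are random.
    seq_len = 12 if task == "semantic" else 24
    mel = torch.randn(batch_size, 120, 80)
    tokens = torch.randint(0, 1000, (batch_size, seq_len))
    return mel, tokens


def train_alternating(model, steps=4):
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    schedule = itertools.cycle(["semantic", "emotion_reasoning"])
    for step, task in zip(range(steps), schedule):
        mel, tokens = sample_batch(task)
        logits = model(mel, tokens[:, :-1])          # teacher forcing
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1)
        )
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(f"step {step} [{task}] loss={loss.item():.3f}")


train_alternating(DualEncoderAudioLLM())
```
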