🤖 AI Summary
Large audio-language models suffer from unstable convergence in speech emotion recognition due to ambiguous emotion boundaries, while smaller models (e.g., 7B) lack sufficient reasoning capacity. To address these issues, we propose a Group-wise Relative Policy Optimization (GRPO) framework that integrates emotion similarity-weighted rewards, explicit structured reasoning, and emotion rule constraints. Building upon pre-trained audio-language models, GRPO models fine-grained emotion similarity, guides stepwise reasoning, and uses intra-group relative advantage updates to sharpen the discrimination of subtle emotions and improve generalization across contexts. Our method achieves state-of-the-art performance on MELD and IEMOCAP, and cross-dataset experiments demonstrate superior robustness and training stability.
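The two core ideas above (similarity-weighted rewards and intra-group relative advantages) can be sketched in a few lines. Note this is a minimal illustrative sketch, not the paper's implementation: the emotion set, pairwise similarity values, and `partial_weight` scale are hypothetical assumptions, and the group normalization follows the generic GRPO recipe of standardizing rewards within each sampled group.

```python
# Sketch of an emotion similarity-weighted reward with GRPO-style
# group-relative advantages. Similarity values and the partial-credit
# scale below are illustrative assumptions, not the paper's values.
import statistics

# Hypothetical pairwise similarity between emotion labels
# (symmetric; identical labels count as 1.0, unlisted pairs as 0.0).
SIM = {
    ("sadness", "fear"): 0.6,
    ("anger", "disgust"): 0.5,
    ("joy", "surprise"): 0.4,
}

def similarity(a: str, b: str) -> float:
    if a == b:
        return 1.0
    return SIM.get((a, b), SIM.get((b, a), 0.0))

def reward(pred: str, gold: str, partial_weight: float = 0.3) -> float:
    # Full reward for an exact match; otherwise partial credit
    # proportional to how emotionally close the prediction is.
    return 1.0 if pred == gold else partial_weight * similarity(pred, gold)

def group_relative_advantages(rewards: list[float]) -> list[float]:
    # GRPO standardizes each sampled response's reward against the
    # mean and std of its own group (no value network needed).
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0
    return [(r - mu) / sigma for r in rewards]

# Four sampled responses for one utterance whose gold label is "sadness".
preds = ["sadness", "fear", "joy", "sadness"]
rewards = [reward(p, "sadness") for p in preds]
advs = group_relative_advantages(rewards)
```

Here a near-miss like "fear" earns a small positive reward instead of zero, which smooths the reward landscape around ambiguous emotion boundaries, while the group-relative normalization keeps updates stable without training a separate critic.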
📝 Abstract
Although Large Audio-Language Models (LALMs) have exhibited outstanding performance in auditory understanding, their performance in affective computing scenarios, particularly in emotion recognition, reasoning, and subtle sentiment differentiation, remains suboptimal. Recent advances in Reinforcement Learning (RL) have shown promise in improving LALMs' reasoning abilities. However, two critical challenges hinder the direct application of RL techniques to Speech Emotion Recognition (SER) tasks: (1) convergence instability caused by ambiguous emotional boundaries and (2) limited reasoning ability when using relatively small models (e.g., 7B-parameter architectures). To overcome these limitations, we introduce EMO-RL, a novel framework incorporating reinforcement learning with two key innovations: Emotion Similarity-Weighted Reward (ESWR) and Explicit Structured Reasoning (ESR). Built upon pretrained LALMs, our method employs group-relative policy optimization with emotion constraints. Comprehensive experiments demonstrate that our EMO-RL training strategies significantly enhance the emotional reasoning capabilities of LALMs, attaining state-of-the-art results on both the MELD and IEMOCAP datasets, and cross-dataset experiments demonstrate its strong generalization.