🤖 AI Summary
Existing multimodal large language models struggle with the complexity and subjectivity of human emotions, exhibiting limited generalization, poor interpretability, and misalignment between reinforcement learning strategies and affective cognition. To address these issues, this work proposes a structured emotional reasoning mechanism that guides the model through step-by-step affective inference. Furthermore, it introduces a reflective emotional reward mechanism that re-evaluates the reasoning process based on image-text consistency and emotional coherence. This approach significantly enhances performance across multiple visual emotion understanding benchmarks while improving the interpretability and self-reflective capacity of emotional reasoning, thereby better aligning with human affective cognitive processes.
📝 Abstract
Multimodal Large Language Models (MLLMs) have shown remarkable progress in visual reasoning and understanding tasks but still struggle to capture the complexity and subjectivity of human emotions. Existing approaches based on supervised fine-tuning often suffer from limited generalization and poor interpretability, while reinforcement learning methods such as Group Relative Policy Optimization (GRPO) fail to align with the intrinsic characteristics of emotional cognition. To address these challenges, we propose Reflective Reinforcement Learning for Emotional Reasoning (EMO-R3), a framework designed to enhance the emotional reasoning ability of MLLMs. Specifically, we introduce Structured Emotional Thinking to guide the model to perform step-by-step emotional reasoning in a structured and interpretable manner, and we design a Reflective Emotional Reward that enables the model to re-evaluate its reasoning based on visual-text consistency and emotional coherence. Extensive experiments demonstrate that EMO-R3 significantly improves both the interpretability and emotional intelligence of MLLMs, achieving superior performance across multiple visual emotion understanding benchmarks.
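The abstract describes the Reflective Emotional Reward as re-scoring a reasoning trace along two axes: visual-text consistency and emotional coherence. The sketch below illustrates one plausible way such a scalar reward could be assembled; it is not the paper's implementation. The per-step feature vectors, the cosine-similarity scoring, and the mixing weight `alpha` are all assumptions introduced for illustration.

```python
# Hedged sketch of a reflective-style emotional reward. The scoring
# functions and the alpha weighting are illustrative assumptions, not
# the mechanism from the EMO-R3 paper.
from dataclasses import dataclass
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors; 0.0 if degenerate."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


@dataclass
class ReasoningStep:
    text_features: list     # toy embedding of the step's textual claim
    image_features: list    # toy embedding of the visual evidence it cites
    emotion_vector: list    # toy emotion distribution predicted at this step


def reflective_reward(steps, alpha=0.5):
    """Combine (a) per-step image-text consistency and (b) emotional
    coherence between consecutive steps into one scalar reward.
    alpha is an assumed mixing weight."""
    if not steps:
        return 0.0
    # (a) consistency: does each textual claim match its cited evidence?
    consistency = sum(
        cosine(s.text_features, s.image_features) for s in steps
    ) / len(steps)
    # (b) coherence: do emotion predictions evolve smoothly across steps?
    if len(steps) > 1:
        coherence = sum(
            cosine(a.emotion_vector, b.emotion_vector)
            for a, b in zip(steps, steps[1:])
        ) / (len(steps) - 1)
    else:
        coherence = 1.0
    return alpha * consistency + (1 - alpha) * coherence
```

In a GRPO-style training loop, a scalar like this would be computed for each sampled reasoning trace in a group and converted into group-relative advantages; that wiring is omitted here.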