EMO-R3: Reflective Reinforcement Learning for Emotional Reasoning in Multimodal Large Language Models

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal large language models struggle with the complexity and subjectivity of human emotions: they generalize poorly, offer limited interpretability, and are trained with reinforcement learning strategies that are misaligned with affective cognition. To address these issues, this work proposes a structured emotional reasoning mechanism that guides the model through step-by-step affective inference, together with a reflective emotional reward mechanism that re-evaluates the reasoning process for image-text consistency and emotional coherence. The approach significantly improves performance across multiple visual emotion understanding benchmarks while making the emotional reasoning more interpretable and self-reflective, better aligning it with human affective cognitive processes.
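A minimal sketch of what such a reflective reward could look like, assuming it blends an image-text consistency score with an emotional-coherence score over the reasoning steps; the paper does not publish a formula, so the scoring proxies, weights, and names (`ReasoningTrace`, `reflective_reward`) below are hypothetical.

```python
# Hypothetical sketch of a reflective emotional reward. The string-matching
# proxies stand in for the vision-language and affect models a real system
# would use; all weights and names are illustrative, not the paper's.
from dataclasses import dataclass


@dataclass
class ReasoningTrace:
    steps: list[str]        # the model's step-by-step emotional inferences
    predicted_emotion: str  # final emotion label, e.g. "joy"


def consistency_score(trace: ReasoningTrace, caption: str) -> float:
    """Toy proxy for image-text consistency: fraction of reasoning steps
    that mention a word from the image caption."""
    caption_words = set(caption.lower().split())
    hits = sum(
        any(word in step.lower().split() for word in caption_words)
        for step in trace.steps
    )
    return hits / max(len(trace.steps), 1)


def coherence_score(trace: ReasoningTrace, valence: dict[str, int]) -> float:
    """Toy proxy for emotional coherence: fraction of steps whose emotion
    words agree in valence (+1 / -1) with the final predicted label."""
    target = valence.get(trace.predicted_emotion, 0)
    agreeing = sum(
        any(word in step.lower() and sign == target
            for word, sign in valence.items())
        for step in trace.steps
    )
    return agreeing / max(len(trace.steps), 1)


def reflective_reward(trace: ReasoningTrace, caption: str,
                      valence: dict[str, int],
                      w_cons: float = 0.5, w_coh: float = 0.5) -> float:
    """Re-evaluate the finished reasoning process itself, so an incoherent
    trace earns low reward even when its final label happens to be right."""
    return (w_cons * consistency_score(trace, caption)
            + w_coh * coherence_score(trace, valence))


trace = ReasoningTrace(
    steps=["The child is smiling at the birthday cake.",
           "Smiling in a celebration scene usually signals joy."],
    predicted_emotion="joy",
)
valence = {"joy": 1, "sadness": -1, "anger": -1}
print(reflective_reward(trace, "a child smiling at a birthday cake", valence))  # 0.75
```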

📝 Abstract
Multimodal Large Language Models (MLLMs) have shown remarkable progress in visual reasoning and understanding tasks but still struggle to capture the complexity and subjectivity of human emotions. Existing approaches based on supervised fine-tuning often suffer from limited generalization and poor interpretability, while reinforcement learning methods such as Group Relative Policy Optimization fail to align with the intrinsic characteristics of emotional cognition. To address these challenges, we propose Reflective Reinforcement Learning for Emotional Reasoning (EMO-R3), a framework designed to enhance the emotional reasoning ability of MLLMs. Specifically, we introduce Structured Emotional Thinking to guide the model to perform step-by-step emotional reasoning in a structured and interpretable manner, and design a Reflective Emotional Reward that enables the model to re-evaluate its reasoning based on visual-text consistency and emotional coherence. Extensive experiments demonstrate that EMO-R3 significantly improves both the interpretability and emotional intelligence of MLLMs, achieving superior performance across multiple visual emotional understanding benchmarks.
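As a rough illustration of how such a reward might drive the policy update, the sketch below assumes the standard GRPO recipe of sampling several reasoning traces per image and normalizing their rewards into group-relative advantages; the prompt text and every name here are assumptions, not the published method.

```python
# Assumed sketch: reflective rewards feeding a GRPO-style group-relative
# advantage A_i = (r_i - mean(r)) / std(r). Not the paper's code.
import statistics

# An illustrative "Structured Emotional Thinking" prompt, guiding
# step-by-step affective inference followed by an explicit reflection step.
STRUCTURED_PROMPT = (
    "Reason about the image step by step:\n"
    "1. Describe the salient visual cues (faces, posture, scene).\n"
    "2. Infer the emotional meaning of each cue.\n"
    "3. Reflect: are the inferences consistent with the image and each other?\n"
    "4. Give the final emotion label."
)


def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Normalize rewards within one group of sampled responses (GRPO)."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard: all rewards identical
    return [(r - mean) / std for r in rewards]


# e.g. reflective rewards for four traces sampled for the same image
print(group_relative_advantages([0.75, 0.40, 0.90, 0.40]))
```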
Problem

Research questions and friction points this paper is trying to address.

Multimodal Large Language Models
Emotional Reasoning
Emotion Understanding
Visual Emotion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reflective Reinforcement Learning
Emotional Reasoning
Multimodal Large Language Models
Structured Emotional Thinking
Reflective Emotional Reward
🔎 Similar Papers
No similar papers found.
Yiyang Fang
School of Computer Science, Wuhan University
Wenke Huang
School of Computer Science, Wuhan University
Federated Learning, MLLM
Pei Fu
MiLM Plus, Xiaomi Inc.
Yihao Yang
School of Computer Science, Wuhan University
Kehua Su
School of Computer Science, Wuhan University
Zhenbo Luo
Xiaomi
Vision Language Model, Computer Vision
Jian Luan
Toshiba, Microsoft, Xiaomi
LLM, VLM, TTS, Singing Synthesis
Mang Ye
Professor, Wuhan University
Multimodal Learning, Person Re-identification, Federated Learning