Beyond Emotion Recognition: A Multi-Turn Multimodal Emotion Understanding and Reasoning Benchmark

📅 2025-08-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal large language models (MLLMs) in psychological applications predominantly focus on emotion recognition and lack systematic modeling of higher-order emotional reasoning, such as causal attribution and behavioral prediction, which limits natural human-AI interaction. Method: We introduce MTMEUR, a multi-turn, multimodal benchmark for deep emotional understanding comprising 1,451 real-world videos and 5,101 progressive questions. We propose a multi-agent collaborative framework in which specialized agents model contextual background, interpersonal relationships, and event-level details to enable fine-grained, interpretable emotional reasoning. Contribution/Results: Experiments show that state-of-the-art MLLMs perform poorly on MTMEUR, confirming the benchmark's difficulty and validity, while the proposed method achieves significant improvements across emotion recognition, causal attribution, and behavioral prediction, offering a new paradigm for affective computing and embodied intelligence.
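The paper's implementation is not reproduced here, but the agent decomposition it describes maps naturally onto a simple pipeline. The following is a minimal Python sketch under stated assumptions: `query_mllm`, the agent prompts, and the three-way aspect split are hypothetical placeholders, not the authors' code.

```python
# Minimal sketch of the multi-agent decomposition described in the summary.
# `query_mllm` is a hypothetical stand-in for any chat-style MLLM API call;
# the prompts and the three-agent split are illustrative assumptions.

def query_mllm(video_path: str, prompt: str) -> str:
    """Placeholder for a multimodal LLM call; wire up a real backend here."""
    raise NotImplementedError

AGENT_PROMPTS = {
    "background": "Describe the scene's setting and contextual background.",
    "relationships": "Describe the characters and their interpersonal dynamics.",
    "events": "Summarize the key events and actions in temporal order.",
}

def answer_turn(video_path: str, question: str, history: list[tuple[str, str]]) -> str:
    # 1. Each specialized agent extracts one aspect of the video.
    aspects = {
        name: query_mllm(video_path, prompt) for name, prompt in AGENT_PROMPTS.items()
    }
    # 2. A final reasoning pass aggregates the aspect reports plus the
    #    dialogue history to answer the current progressive question.
    context = "\n".join(f"[{k}] {v}" for k, v in aspects.items())
    dialogue = "\n".join(f"Q: {q}\nA: {a}" for q, a in history)
    prompt = (
        f"Aspect reports:\n{context}\n\nPrevious turns:\n{dialogue}\n\n"
        f"Question: {question}\nAnswer with a short justification."
    )
    return query_mllm(video_path, prompt)
```

Running the aspect agents before a single reasoning pass leaves intermediate evidence that can be inspected, which is plausibly where the summary's claim of interpretable reasoning comes from.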

📝 Abstract
Multimodal large language models (MLLMs) have been widely applied across various fields due to their powerful perceptual and reasoning capabilities. In psychology, these models hold promise for a deeper understanding of human emotions and behaviors. However, recent research has primarily focused on enhancing their emotion recognition abilities, leaving the substantial potential of emotion reasoning largely unexplored, even though it is crucial for improving the naturalness and effectiveness of human-machine interactions. Therefore, in this paper, we introduce a multi-turn multimodal emotion understanding and reasoning (MTMEUR) benchmark, which comprises 1,451 videos from real-life scenarios along with 5,101 progressive questions. These questions cover various aspects, including emotion recognition, the potential causes of emotions, and future action prediction. In addition, we propose a multi-agent framework in which each agent specializes in a specific aspect, such as background context, character dynamics, and event details, to improve the system's reasoning capabilities. Finally, we evaluate existing MLLMs and our agent-based method on the proposed benchmark, finding that most models face significant challenges with this task.
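The abstract fixes the benchmark's shape: one video paired with a sequence of progressive questions escalating from recognition to cause to prediction. A record might plausibly look like the sketch below; the field names and question-type labels are assumptions, since the paper's actual schema is not shown on this page.

```python
# Plausible shape of one MTMEUR benchmark record, inferred from the abstract.
# Field names and the question-type taxonomy are assumptions, not the
# published schema.
from dataclasses import dataclass, field

@dataclass
class QuestionTurn:
    turn: int           # position in the multi-turn dialogue
    question_type: str  # e.g., "emotion_recognition", "cause", "action_prediction"
    question: str
    answer: str

@dataclass
class MTMEURSample:
    video_path: str                               # one of the 1,451 real-life clips
    turns: list[QuestionTurn] = field(default_factory=list)

sample = MTMEURSample(
    video_path="videos/clip_0001.mp4",
    turns=[
        QuestionTurn(1, "emotion_recognition", "What emotion does the speaker show?", "frustration"),
        QuestionTurn(2, "cause", "What likely caused this emotion?", "a missed deadline"),
        QuestionTurn(3, "action_prediction", "What is the speaker likely to do next?", "apologize and renegotiate"),
    ],
)
```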
Problem

Research questions and friction points this paper is trying to address.

Advancing emotion reasoning beyond recognition for human-machine interaction
Addressing multi-turn multimodal emotion understanding in real-life scenarios
Overcoming challenges in emotion cause analysis and future action prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent framework for specialized reasoning
Multi-turn multimodal emotion understanding benchmark
Progressive questions covering emotion recognition and prediction
Jinpeng Hu
Hefei University of Technology
natural language processing · named entity recognition · summarization
Hongchang Shi
Hefei University of Technology, Hefei, China
Chongyuan Dai
Hefei University of Technology, Hefei, China
Zhuo Li
The Chinese University of Hong Kong, Shenzhen, Shenzhen, China
Peipei Song
University of Science and Technology of China
Multimedia · Computer Vision · Machine Learning
Meng Wang
Hefei University of Technology, Institute of Artificial Intelligence (IAI), Hefei Comprehensive National Science Center, Hefei, China