🤖 AI Summary
Existing multimodal large language models (MLLMs) in psychological applications predominantly focus on emotion recognition and lack systematic modeling of higher-order emotional reasoning, such as causal attribution and behavioral prediction, which limits natural human-AI interaction. Method: We introduce MTMEUR, a multi-turn, multimodal benchmark for deep emotional understanding comprising 1,451 real-world videos and 5,101 progressive questions. We also propose a multi-agent collaborative framework in which specialized agents model background context, character dynamics, and event-level details to support fine-grained, interpretable emotional reasoning. Contribution/Results: Experiments show that most state-of-the-art MLLMs struggle on MTMEUR, confirming the benchmark's difficulty, while our agent-based method improves reasoning performance across emotion recognition, causal attribution, and behavioral prediction.
📝 Abstract
Multimodal large language models (MLLMs) have been widely applied across various fields due to their powerful perceptual and reasoning capabilities. In psychology, these models hold promise for a deeper understanding of human emotions and behaviors. However, recent research primarily focuses on enhancing their emotion recognition abilities, leaving largely untapped their substantial potential for emotion reasoning, which is crucial for improving the naturalness and effectiveness of human-machine interaction. In this paper, we therefore introduce a multi-turn multimodal emotion understanding and reasoning (MTMEUR) benchmark, which comprises 1,451 videos from real-life scenarios along with 5,101 progressive questions. These questions cover various aspects, including emotion recognition, the potential causes of emotions, and future action prediction. In addition, we propose a multi-agent framework in which each agent specializes in a specific aspect, such as background context, character dynamics, and event details, to improve the system's reasoning capabilities. Finally, we evaluate existing MLLMs and our agent-based method on the proposed benchmark, revealing that most models face significant challenges with this task.
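The multi-agent idea described above can be sketched in miniature: specialist agents each produce an analysis of one aspect (background context, character dynamics, event details), and an aggregator fuses their reports into a single reasoning prompt for a final model call. This is a hedged illustration of the general pattern, not the paper's implementation; all function names, the `AgentReport` type, and the prompt format are hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass
class AgentReport:
    """One specialist agent's analysis of a single aspect of the video."""
    aspect: str
    analysis: str

# Each agent is a stub here; in a real system it would call an MLLM
# with an aspect-specific prompt over the video input.
def background_agent(video_desc: str) -> AgentReport:
    return AgentReport("background", f"scene/context cues in: {video_desc}")

def character_agent(video_desc: str) -> AgentReport:
    return AgentReport("characters", f"interpersonal dynamics in: {video_desc}")

def event_agent(video_desc: str) -> AgentReport:
    return AgentReport("events", f"key events in: {video_desc}")

def aggregate(question: str, reports: list) -> str:
    """Fuse specialist reports into one prompt for a final reasoning call."""
    evidence = "\n".join(f"[{r.aspect}] {r.analysis}" for r in reports)
    return f"Question: {question}\nEvidence:\n{evidence}\nAnswer with reasoning:"

video = "two friends argue at a dinner table"
reports = [agent(video) for agent in (background_agent, character_agent, event_agent)]
prompt = aggregate("Why is the speaker upset?", reports)
print(prompt)
```

The design choice this sketches is separation of concerns: each agent sees the same input but attends to one aspect, so the aggregated evidence is structured rather than a single monolithic caption, which is what enables the fine-grained, multi-turn reasoning the benchmark targets.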