MME-Emotion: A Holistic Evaluation Benchmark for Emotional Intelligence in Multimodal Large Language Models

📅 2025-08-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing emotion benchmarks inadequately assess multimodal large language models' (MLLMs) cross-scenario generalization and affective attribution reasoning. To address this, we introduce MME-Emotion, the first comprehensive video benchmark for affective intelligence, comprising over 6,000 short video clips and eight emotion-centric tasks. The benchmark establishes a scalable, diverse, and protocol-unified evaluation framework, integrating hybrid metrics with a multi-agent analytical architecture to systematically disentangle emotion recognition from chain-of-thought (CoT) reasoning, marking the first such decomposition in the literature. Leveraging multimodal question-answer pair construction, fine-grained emotion-trigger annotation, and automated analysis, we evaluate 20 state-of-the-art MLLMs. Results reveal severe limitations in current models' affective intelligence: the best-performing model achieves only 39.3% emotion recognition accuracy and a 56.0% CoT score, with distinct capability trajectories observed between generalist and specialist models.
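To make the two-score protocol concrete, below is a minimal sketch of how hybrid metrics of this kind could be computed: exact-match accuracy for the recognition half, and an averaged judge rating for the CoT half. All function names and the `judge` callable are illustrative assumptions, not the paper's actual implementation.

```python
from statistics import mean

def recognition_accuracy(predictions: list[str], ground_truth: list[str]) -> float:
    """Fraction of emotion labels predicted correctly (exact match)."""
    assert len(predictions) == len(ground_truth) and ground_truth
    return sum(p == g for p, g in zip(predictions, ground_truth)) / len(ground_truth)

def cot_score(rationales: list[str], judge) -> float:
    """Average quality of chain-of-thought rationales, scaled to [0, 100].

    `judge` is a callable returning a rating in [0, 1] for one rationale,
    e.g. an LLM-as-judge agent checking whether the stated emotion
    triggers match the annotated ones.
    """
    return 100 * mean(judge(r) for r in rationales)
```

Under this reading, the reported 39.3% and 56.0% headline numbers would correspond to the two functions' outputs aggregated over the benchmark's eight tasks, though the paper's exact aggregation scheme is not specified here.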

📝 Abstract
Recent advances in multimodal large language models (MLLMs) have catalyzed transformative progress in affective computing, enabling models to exhibit emergent emotional intelligence. Despite substantial methodological progress, current emotional benchmarks remain limited, as two questions remain open: (a) the generalization abilities of MLLMs across distinct scenarios, and (b) their reasoning capabilities to identify the triggering factors behind emotional states. To bridge these gaps, we present MME-Emotion, a systematic benchmark that assesses both the emotional understanding and reasoning capabilities of MLLMs, offering scalable capacity, diverse settings, and unified protocols. As the largest emotional intelligence benchmark for MLLMs, MME-Emotion contains over 6,000 curated video clips with task-specific question-answering (QA) pairs, spanning broad scenarios to formulate eight emotional tasks. It further incorporates a holistic evaluation suite with hybrid metrics for emotion recognition and reasoning, analyzed through a multi-agent system framework. Through a rigorous evaluation of 20 advanced MLLMs, we uncover both their strengths and limitations, yielding several key insights: (1) Current MLLMs exhibit unsatisfactory emotional intelligence, with the best-performing model achieving only a 39.3% recognition score and a 56.0% Chain-of-Thought (CoT) score on our benchmark. (2) Generalist models (e.g., Gemini-2.5-Pro) derive emotional intelligence from generalized multimodal understanding capabilities, while specialist models (e.g., R1-Omni) can achieve comparable performance through domain-specific post-training adaptation. By introducing MME-Emotion, we hope it can serve as a foundation for advancing MLLMs' emotional intelligence in the future.
Problem

Research questions and friction points this paper is trying to address.

Assessing generalization abilities of MLLMs across diverse emotional scenarios
Evaluating reasoning capabilities of MLLMs for emotional trigger identification
Addressing limitations in current emotional intelligence benchmarks for MLLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal benchmark for emotional intelligence evaluation
Over 6,000 curated video clips with task-specific QA pairs (one possible item schema is sketched after this list)
Holistic evaluation suite with hybrid metrics
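
The contributions above imply a per-clip record that bundles the question, candidate answers, and trigger annotations. Below is a minimal sketch of one plausible item schema; every field name is an assumption for illustration, not the released dataset format.

```python
from dataclasses import dataclass, field

@dataclass
class EmotionQAItem:
    """One hypothetical benchmark entry: a video clip plus a task-specific QA pair."""
    clip_path: str      # path or URL of the video clip
    task: str           # one of the eight emotion-centric tasks
    question: str       # task-specific question about the clip
    options: list[str]  # candidate answers for recognition-style tasks
    answer: str         # gold emotion label or answer
    triggers: list[str] = field(default_factory=list)  # fine-grained emotion-trigger annotations
```

A schema like this would let the recognition metric consume `options` and `answer`, while the multi-agent CoT analysis consumes `triggers`.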