MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models

📅 2025-04-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing MLLM benchmarks inadequately evaluate Unified Multimodal Large Language Models (U-MLLMs): traditional tasks lack standardization, undermining consistent cross-model comparison, and systematic evaluation of mixed-modality generation, which is critical for multimodal reasoning, is absent. To address this, we introduce MME-Unify, the first comprehensive benchmark tailored to U-MLLMs. It establishes a unified evaluation framework integrating comprehension and generation, pioneers five novel mixed-modality tasks (e.g., image editing, joint vision-language reasoning), and proposes quantitative cross-modal capability metrics. Leveraging multi-dataset fusion, task decoupling, and horizontal evaluation across 12 state-of-the-art models, including Janus-Pro, EMU3, and Gemini2-flash, we uncover substantial performance gaps in mixed-modality tasks. All code and data are publicly released.

📝 Abstract
Existing MLLM benchmarks face significant challenges in evaluating Unified MLLMs (U-MLLMs) due to: 1) the lack of standardized benchmarks for traditional tasks, leading to inconsistent comparisons; and 2) the absence of benchmarks for mixed-modality generation, leaving multimodal reasoning capabilities unassessed. We present a comprehensive evaluation framework designed to systematically assess U-MLLMs. Our benchmark includes: 1. Standardized Traditional Task Evaluation. We sample from 12 datasets, covering 10 tasks with 30 subtasks, ensuring consistent and fair comparisons across studies. 2. Unified Task Assessment. We introduce five novel tasks testing multimodal reasoning, including image editing, commonsense QA with image generation, and geometric reasoning. 3. Comprehensive Model Benchmarking. We evaluate 12 leading U-MLLMs, such as Janus-Pro, EMU3, VILA-U, and Gemini2-flash, alongside specialized understanding models (e.g., Claude-3.5-Sonnet) and generation models (e.g., DALL-E-3). Our findings reveal substantial performance gaps in existing U-MLLMs, highlighting the need for more robust models capable of handling mixed-modality tasks effectively. The code and evaluation data can be found at https://mme-unify.github.io/.
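To make the benchmark structure above concrete, here is a minimal sketch of how per-sample results could be aggregated into per-task accuracies and a single unified score. The record format, field names, and macro-averaging scheme are assumptions for illustration only, not the paper's actual scoring protocol:

```python
from collections import defaultdict

def unified_score(records):
    """Aggregate per-sample correctness into per-task accuracy,
    then macro-average tasks into one overall benchmark score.

    `records` is a list of dicts like
    {"task": "image_editing", "correct": True}  (hypothetical format).
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["task"]] += 1
        hits[r["task"]] += int(r["correct"])
    # Per-task accuracy, then an unweighted mean across tasks
    per_task = {t: hits[t] / totals[t] for t in totals}
    overall = sum(per_task.values()) / len(per_task)
    return per_task, overall
```

Macro-averaging (weighting each task equally regardless of sample count) is one plausible choice when tasks vary widely in size, as they do when sampling from 12 heterogeneous datasets.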
Problem

Research questions and friction points this paper is trying to address.

Lack of standardized benchmarks for unified multimodal model evaluation
Absence of tasks assessing mixed-modality generation capabilities
Inconsistent performance measurement across diverse multimodal tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Standardized evaluation for 10 tasks across 12 datasets
Five novel tasks testing multimodal reasoning capabilities
Benchmarked 12 leading U-MLLMs and specialized models