🤖 AI Summary
Existing multimodal evaluation benchmarks assess visual understanding and generation capabilities in isolation, neglecting their bidirectional coupling and interdependence.
Method: We introduce Uni-MMMU, an interdisciplinary, unified multimodal benchmark spanning eight reasoning-intensive domains (e.g., science, programming, mathematics) and built around a bidirectional task paradigm: “understanding → generation” and “generation → understanding.” It incorporates verifiable intermediate reasoning steps and a joint text-visual scoring protocol, supported by structured prompting and a standardized evaluation pipeline.
Contribution/Results: Experiments on state-of-the-art unified, generation-only, and understanding-only models reveal substantial performance gaps and strong cross-modal dependencies between the two capabilities, offering empirical evidence of when and how understanding and generation reinforce one another. Our benchmark enables fine-grained, cross-domain capability analysis and establishes a rigorous foundation for evaluating and advancing unified multimodal foundation models.
📝 Abstract
Unified multimodal models aim to jointly enable visual understanding and generation, yet current benchmarks rarely examine their true integration. Existing evaluations either treat the two abilities in isolation or overlook tasks that inherently couple them. To address this gap, we present Uni-MMMU, a comprehensive and discipline-aware benchmark that systematically unfolds the bidirectional synergy between generation and understanding across eight reasoning-centric domains, including science, coding, mathematics, and puzzles. Each task is bidirectionally coupled, requiring models to either (i) leverage conceptual understanding to guide precise visual synthesis, or (ii) utilize generation as a cognitive scaffold for analytical reasoning. Uni-MMMU incorporates verifiable intermediate reasoning steps, unique ground truths, and a reproducible scoring protocol for both textual and visual outputs. Through extensive evaluation of state-of-the-art unified, generation-only, and understanding-only models, we reveal substantial performance disparities and cross-modal dependencies, offering new insights into when and how these abilities reinforce one another, and establishing a reliable foundation for advancing unified models.
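To make the idea of a joint text-visual scoring protocol concrete, the minimal Python sketch below combines an exact-match text score with a per-sample visual score into a single number. All names, weights, and the normalization scheme here are illustrative assumptions for exposition, not the benchmark's actual implementation.

```python
# Hypothetical sketch of a joint text-visual scoring protocol.
# Names (Sample, joint_score), weights, and normalization are assumptions,
# not the benchmark's actual API.

from dataclasses import dataclass


@dataclass
class Sample:
    pred_answer: str      # model's textual answer
    gold_answer: str      # unique ground-truth answer
    visual_score: float   # [0, 1] score from an image judge (e.g., feature match)


TEXT_WEIGHT = 0.5    # assumed equal weighting of text and visual components
VISUAL_WEIGHT = 0.5


def text_score(pred: str, gold: str) -> float:
    """Exact match after light normalization (case, surrounding whitespace)."""
    return float(pred.strip().lower() == gold.strip().lower())


def joint_score(sample: Sample) -> float:
    """Combine textual and visual correctness into a single sample score."""
    return (TEXT_WEIGHT * text_score(sample.pred_answer, sample.gold_answer)
            + VISUAL_WEIGHT * sample.visual_score)


# Example: a correct textual answer paired with a partially correct image.
print(joint_score(Sample("B", "b", visual_score=0.8)))  # -> 0.9
```

In practice the visual component would come from a task-specific checker (e.g., comparing a generated diagram against a unique ground-truth rendering), but the aggregation shape above is the part the protocol description implies.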