🤖 AI Summary
Current unified multimodal models lack an integrated evaluation framework that operates without auxiliary models or annotated images, covers both understanding and generation comprehensively, and adequately measures benchmark diversity and instruction-following ability. To address these gaps, we propose UniEval, the first zero-shot, fully instruction-driven unified evaluation framework. Our approach comprises three key contributions: (1) UniBench, a challenging benchmark spanning 81 fine-grained tags; (2) a holistic evaluation paradigm that assesses understanding and generation jointly; and (3) UniScore, a metric that correlates significantly more strongly with human judgments than state-of-the-art metrics (ρ > 0.85). Extensive experiments demonstrate that UniEval precisely characterizes the capabilities of unified models, including instruction adherence, cross-task generalization, and multimodal synergy, uncovering insights that existing benchmarks miss.
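The ρ quoted above is a Spearman rank correlation between an automatic metric's scores and human ratings. As a minimal, self-contained illustration of how such a correlation is computed (the numbers below are made up for demonstration and are not UniEval results):

```python
# Spearman rank correlation between a metric and human judgments.
# All scores here are hypothetical placeholders, not UniEval outputs.
from scipy.stats import spearmanr

metric_scores = [0.62, 0.71, 0.55, 0.80, 0.67]  # hypothetical per-model metric scores
human_ratings = [3.1, 3.8, 3.0, 4.2, 2.9]       # hypothetical per-model human ratings

# spearmanr ranks both lists and correlates the ranks, so it measures
# agreement in model ordering rather than in raw score values.
rho, p_value = spearmanr(metric_scores, human_ratings)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")  # rho = 0.700 here
```

Because Spearman's ρ compares rankings rather than absolute values, a high ρ means the metric orders models the way human annotators do, even if the score scales differ.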
📝 Abstract
The emergence of unified multimodal understanding and generation models is rapidly attracting attention because of their ability to enhance instruction-following capabilities while reducing model redundancy. However, these models lack a unified evaluation framework that would enable an elegant, simplified, and holistic evaluation. Current models are instead evaluated on multiple task-specific benchmarks, which suffer from significant limitations: no overall result, errors introduced by extra evaluation models, reliance on extensive labeled images, insufficient benchmark diversity, and metrics with limited capacity for instruction-following evaluation. To tackle these challenges, we introduce UniEval, the first evaluation framework designed for unified multimodal models that requires no extra models, images, or annotations, enabling a simplified and unified evaluation process. The UniEval framework contains a holistic benchmark, UniBench (supporting both unified and visual generation models), along with the corresponding UniScore metric. UniBench includes 81 fine-grained tags, yielding high diversity. Experimental results indicate that UniBench is more challenging than existing benchmarks and that UniScore aligns closely with human evaluations, surpassing current metrics. Moreover, we extensively evaluate SoTA unified and visual generation models, uncovering new insights into UniEval's unique value.
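To make "without extra models, images, or annotations" concrete, the sketch below shows one way such an evaluation loop could be structured: the unified model generates an image from an instruction and then, acting as its own verifier, answers an instruction-derived question about that image, so no judge model, reference image, or label set is needed. This is a hedged sketch under those assumptions; all names here (`Case`, `evaluate`, `generate`, `answer`, `DummyModel`) are hypothetical placeholders, not UniEval's actual API.

```python
# A sketch of instruction-driven self-evaluation for a unified model:
# the same model handles generation and understanding, so the "ground
# truth" is implied by the instruction rather than by labeled images.
from dataclasses import dataclass


@dataclass
class Case:
    instruction: str  # prompt for the generation side
    question: str     # check derived from the instruction itself
    expected: str     # answer implied by the instruction (no labeled image needed)


def evaluate(model, cases) -> float:
    """Return the fraction of cases where the model's understanding side
    confirms that its generation side followed the instruction."""
    correct = 0
    for case in cases:
        image = model.generate(case.instruction)      # generation
        answer = model.answer(image, case.question)   # understanding
        correct += int(answer.strip().lower() == case.expected.lower())
    return correct / len(cases)


if __name__ == "__main__":
    class DummyModel:  # trivial stand-in so the sketch runs end to end
        def generate(self, instruction):
            return "<image bytes>"

        def answer(self, image, question):
            return "yes"

    cases = [Case("a red cube to the left of a blue sphere",
                  "Is the red cube to the left of the blue sphere?", "yes")]
    print(evaluate(DummyModel(), cases))  # 1.0 with the dummy model
```

A design consequence of this setup is that a single accuracy-style number emerges per model, which is what allows an overall result and a human-correlatable score like UniScore, in contrast to the per-benchmark fragmentation the abstract criticizes.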