🤖 AI Summary
This paper addresses the challenge that large multimodal models (LMMs), when serving as multimodal evaluators, fail to reliably adhere to diverse, fine-grained evaluation criteria. To this end, we introduce Multi-Crit, the first benchmark explicitly designed to assess multi-criteria judging capability across open-ended generation and verifiable reasoning tasks. We propose three novel quantitative metrics: criterion adherence, criterion-switching flexibility, and preference-conflict identification. Multi-Crit comprises challenging response pairs annotated by human experts under multiple, often conflicting criteria. We systematically evaluate 25 mainstream LMMs and further analyze several alignment strategies, including critic fine-tuning, reasoning fine-tuning, and test-time scaling. Results reveal significant deficiencies in criterion consistency and fine-grained generalization, especially among open-source LMMs; even proprietary models fall short of reliable performance. These findings expose a critical bottleneck in current multimodal evaluation capabilities.
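The three metrics are named here but not formally defined. As a rough, hypothetical sketch only (the record format, field names, and scoring rules below are assumptions for illustration, not the paper's actual formulations), criterion-level scores of this kind could be computed as follows:

```python
# Hypothetical scoring sketch: criterion adherence is taken here as the fraction
# of judgments that match the human label under each criterion, and
# preference-conflict identification as the fraction of conflicting pairs the
# judge explicitly flags. These are illustrative assumptions, not Multi-Crit's
# official metric definitions.
from collections import defaultdict

def criterion_adherence(records):
    """records: list of dicts with keys 'criterion', 'judge_pref', 'human_pref'."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["criterion"]] += 1
        correct[r["criterion"]] += int(r["judge_pref"] == r["human_pref"])
    # Per-criterion accuracy of the judge against human annotations.
    return {c: correct[c] / total[c] for c in total}

def conflict_identification(pairs):
    """pairs: list of dicts with 'human_prefs' (criterion -> preferred response)
    and 'judge_flags_conflict' (bool). A pair counts as conflicting when
    different criteria prefer different responses."""
    conflicting = [p for p in pairs if len(set(p["human_prefs"].values())) > 1]
    if not conflicting:
        return None
    # Recall of explicitly flagged conflicts among truly conflicting pairs.
    return sum(p["judge_flags_conflict"] for p in conflicting) / len(conflicting)
```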
📝 Abstract
Large multimodal models (LMMs) are increasingly adopted as judges in multimodal evaluation systems due to their strong instruction following and consistency with human preferences. However, their ability to follow diverse, fine-grained evaluation criteria remains underexplored. We develop Multi-Crit, a benchmark for evaluating multimodal judges on their capacity to follow pluralistic criteria and produce reliable criterion-level judgments. Covering both open-ended generation and verifiable reasoning tasks, Multi-Crit is built through a rigorous data curation pipeline that gathers challenging response pairs with multi-criterion human annotations. It further introduces three novel metrics for systematically assessing pluralistic adherence, criterion-switching flexibility, and the ability to recognize criterion-level preference conflicts. Comprehensive analysis of 25 LMMs reveals that 1) proprietary models still struggle to maintain consistent adherence to pluralistic criteria, especially in open-ended evaluation; 2) open-source models lag further behind in flexibly following diverse criteria; and 3) critic fine-tuning with holistic judgment signals enhances visual grounding but fails to generalize to pluralistic criterion-level judgment. Additional analyses on reasoning fine-tuning, test-time scaling, and boundary consistency between open-source and proprietary models further probe the limits of current multimodal judges. As a pioneering study, Multi-Crit lays the foundation for building reliable and steerable multimodal AI evaluation.