MM-CRITIC: A Holistic Evaluation of Large Multimodal Models as Multimodal Critique

📅 2025-11-12
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of systematic evaluation of the critique ability of large multimodal models (LMMs). The authors propose MM-CRITIC, a holistic benchmark for evaluating multimodal critique along three dimensions—basic, correction, and comparison—spanning 8 main task types, over 500 tasks, and 4,471 instances built from the responses of LMMs of varying sizes. Methodologically, MM-CRITIC integrates expert-informed ground answers into scoring rubrics that guide GPT-4o in annotating responses and generating reference critiques, which serve as anchors for reliable judgments. Extensive experiments on mainstream LMMs validate the benchmark's effectiveness, reveal a correlation between response quality and critique quality, and show that critique difficulty varies across the evaluation dimensions. The dataset and code are publicly released, providing a standardized resource for research on multimodal critique.
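For concreteness, the sketch below shows what a single MM-CRITIC instance might look like across the three dimensions. The field names and structure are illustrative assumptions, not the released dataset schema; consult the repository for the actual format.

```python
# Hypothetical shape of one MM-CRITIC instance (field names are assumptions,
# not the released schema), covering the three evaluation dimensions.
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class CriticSample:
    image_path: str                        # visual input for the task
    task_type: str                         # one of the 8 main task types, e.g. captioning
    dimension: Literal["basic", "correction", "comparison"]
    question: str                          # instruction given to the responding LMM
    response: str                          # LMM response to be critiqued
    second_response: Optional[str] = None  # only set for the comparison dimension
    ground_answer: str = ""                # expert-informed reference answer
    reference_critique: str = ""           # GPT-4o-generated anchor critique
```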

📝 Abstract
The ability of critique is vital for models to self-improve and serve as reliable AI assistants. While extensively studied in language-only settings, multimodal critique of Large Multimodal Models (LMMs) remains underexplored despite their growing capabilities in tasks like captioning and visual reasoning. In this work, we introduce MM-CRITIC, a holistic benchmark for evaluating the critique ability of LMMs across multiple dimensions: basic, correction, and comparison. Covering 8 main task types and over 500 tasks, MM-CRITIC collects responses from various LMMs with different model sizes and is composed of 4,471 samples. To enhance the evaluation reliability, we integrate expert-informed ground answers into scoring rubrics that guide GPT-4o in annotating responses and generating reference critiques, which serve as anchors for trustworthy judgments. Extensive experiments validate the effectiveness of MM-CRITIC and provide a comprehensive assessment of leading LMMs' critique capabilities under multiple dimensions. Further analysis reveals some key insights, including the correlation between response quality and critique, and varying critique difficulty across evaluation dimensions. Our code is available at https://github.com/MichealZeng0420/MM-Critic.
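The scoring-guidance mechanism can be pictured as a rubric-conditioned judge call: GPT-4o sees the ground answer and a reference critique as anchors, then grades a candidate critique. The sketch below is a hedged illustration using the OpenAI Python SDK; the prompt wording, rubric fields, and 1-10 scale are assumptions for illustration, not the paper's released pipeline.

```python
# Minimal sketch of rubric-guided critique judging with GPT-4o (illustrative only).
# The prompt template, rubric fields, and score scale are assumptions; the actual
# scoring-guidance pipeline is in the MM-Critic repository.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = """You are grading a model-written critique of a multimodal response.
Question: {question}
Model response under critique: {response}
Expert-informed reference answer: {ground_answer}
Reference critique (anchor): {reference_critique}

Score the candidate critique from 1 (poor) to 10 (excellent) for how accurately it
identifies errors relative to the reference answer, then briefly justify the score.
Candidate critique: {candidate_critique}"""

def judge_critique(question: str, response: str, ground_answer: str,
                   reference_critique: str, candidate_critique: str) -> str:
    """Ask GPT-4o to score one candidate critique against the reference anchors."""
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            question=question,
            response=response,
            ground_answer=ground_answer,
            reference_critique=reference_critique,
            candidate_critique=candidate_critique,
        )}],
        temperature=0,  # deterministic grading for reproducible scores
    )
    return completion.choices[0].message.content
```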
Problem

Research questions and friction points this paper is trying to address.

Evaluating multimodal critique ability of Large Multimodal Models
Assessing critique capabilities across basic, correction, and comparison dimensions
Ensuring reliable judgment of critiques via expert-informed scoring rubrics and reference anchors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces MM-CRITIC, a benchmark for evaluating the multimodal critique ability of LMMs
Uses expert-informed scoring rubrics to guide GPT-4o in annotation and reference-critique generation
Assesses critique capabilities across three evaluation dimensions