🤖 AI Summary
Existing multimodal benchmarks suffer from systematic vision-irrelevant biases, enabling vision-unaware large language models (LLMs) to achieve spuriously high scores and severely undermining evaluation validity. To address this, we propose MMEvalPro, a benchmark that introduces a "perception question–knowledge-anchor question–original question" triplet evaluation paradigm. It combines human-in-the-loop annotation, a trilogy evaluation pipeline, and source questions drawn from MMMU, ScienceQA, and MathVista to suppress non-visual shortcuts in multiple-choice questions (MCQs). MMEvalPro comprises 2,138 triplets (6,414 questions), two-thirds of which are annotated by domain experts. Experiments show that the best Large Multimodal Model (LMM) underperforms humans by 31.73% (vs. only 8.03% on prior benchmarks), and the best LLM lags the best LMM by 23.09% (vs. only 14.64% previously), substantially widening performance gaps, reducing Type-I errors, and improving both the difficulty and the reliability of the assessment.
📝 Abstract
Large Multimodal Models (LMMs) exhibit impressive cross-modal understanding and reasoning abilities, often assessed through multiple-choice questions (MCQs) that include an image, a question, and several options. However, many benchmarks used for such evaluations suffer from systematic biases. Remarkably, Large Language Models (LLMs) without any visual perception capabilities achieve non-trivial performance, undermining the credibility of these evaluations. To address this issue while maintaining the efficiency of MCQ evaluation, we propose MMEvalPro, a benchmark designed to avoid Type-I errors through a trilogy evaluation pipeline and more rigorous metrics. For each original question from existing benchmarks, human annotators augment it by creating one perception question and one knowledge anchor question through a meticulous annotation process. MMEvalPro comprises 2,138 question triplets, totaling 6,414 distinct questions. Two-thirds of these questions are manually labeled by human experts, while the rest are sourced from existing benchmarks (MMMU, ScienceQA, and MathVista). Compared with the existing benchmarks, our experiments with the latest LLMs and LMMs demonstrate that MMEvalPro is more challenging (the best LMM lags behind human performance by 31.73%, compared to an average gap of 8.03% on previous benchmarks) and more trustworthy (the best LLM trails the best LMM by 23.09%, whereas the gap on previous benchmarks is just 14.64%). Our in-depth analysis explains the reasons for the large performance gap and justifies the trustworthiness of the evaluation, underscoring its significant potential for advancing future research.
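To make the triplet paradigm concrete, below is a minimal Python sketch of a stricter triplet-level scoring rule. It assumes a model is credited for a triplet only when it answers the original question, the perception question, and the knowledge-anchor question all correctly; the `Triplet` structure and the `triplet_accuracy` function are illustrative names, and the exact metric definition is not spelled out in the abstract.

```python
from dataclasses import dataclass
from typing import Iterable


@dataclass
class Triplet:
    """One triplet-style item: the original MCQ plus its two augmentations."""
    original_correct: bool     # model answered the original question correctly
    perception_correct: bool   # model answered the perception question correctly
    knowledge_correct: bool    # model answered the knowledge-anchor question correctly


def triplet_accuracy(triplets: Iterable[Triplet]) -> float:
    """Stricter accuracy: credit a triplet only if all three answers are correct.

    Assumed scoring rule for illustration; the abstract only states that the
    benchmark uses a trilogy evaluation pipeline with more rigorous metrics.
    """
    items = list(triplets)
    if not items:
        return 0.0
    passed = sum(
        t.original_correct and t.perception_correct and t.knowledge_correct
        for t in items
    )
    return passed / len(items)


if __name__ == "__main__":
    # A model that guesses the original question correctly but fails the
    # perception check gets no credit under this stricter rule.
    results = [
        Triplet(original_correct=True, perception_correct=True, knowledge_correct=True),
        Triplet(original_correct=True, perception_correct=False, knowledge_correct=True),
    ]
    print(f"triplet accuracy: {triplet_accuracy(results):.2f}")  # 0.50
```

Under such a rule, a vision-unaware LLM that exploits textual shortcuts on the original question would still fail the perception check, which is one plausible way a triplet-based metric reduces Type-I errors.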