🤖 AI Summary
This work addresses the limitations of existing benchmarks for evaluating multimodal large language models (MLLMs) as judges, which categorize evaluations solely by task type and thus fail to capture the underlying judgment capabilities, yielding unreliable assessments of judge quality. To this end, we propose M-JudgeBench, the first fine-grained, ten-dimensional benchmark specifically designed to assess judging competence, and introduce the Judge-MCTS framework, which leverages Monte Carlo Tree Search to generate high-quality paired reasoning trajectories. By decoupling chain-of-thought reasoning along explicit capability dimensions, our approach enables capability-driven training of judge models. The resulting M-Judger series significantly outperforms existing methods on both M-JudgeBench and established benchmarks, and systematically reveals critical reliability deficiencies in current MLLMs concerning reasoning style, response length, and cross-model variability.
📝 Abstract
Using Multimodal Large Language Models (MLLMs) as judges to achieve precise and consistent evaluations has become an emerging paradigm across various domains. Evaluating the capability and reliability of MLLM-as-a-judge systems is therefore essential for ensuring trustworthy assessment. Existing judge benchmarks categorize samples by task type but fail to capture the fundamental judgment capabilities required for reliable evaluation. In this work, we introduce M-JudgeBench, a ten-dimensional capability-oriented benchmark designed to comprehensively assess the judgment abilities of MLLMs. Our benchmark decomposes evaluation into pairwise Chain-of-Thought (CoT) comparison, length-bias avoidance, and process error detection tasks, jointly covering ten fine-grained subtasks. This design enables diagnosis of model reliability across reasoning styles, response lengths, and cross-model variations. Our evaluation uncovers systematic weaknesses in existing MLLM-as-a-judge systems. To address this issue, we further propose Judge-MCTS, a data construction framework that generates pairwise reasoning trajectories of varying correctness and length. Using Judge-MCTS, we construct an MCTS-augmented dataset and train M-Judger, a series of strong judge models. Extensive experiments demonstrate the superiority of M-Judger on both M-JudgeBench and existing judge benchmarks. Overall, our work establishes a more principled foundation for evaluating MLLM-as-a-judge through M-JudgeBench and the Judge-MCTS framework, paving the way for future research on judge model evaluation and capability-driven judge training.
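The core idea behind Judge-MCTS, using tree search to harvest reasoning trajectories of varying quality and pairing the extremes as preference data, can be illustrated with a minimal, generic Monte Carlo Tree Search sketch. This is not the paper's implementation: the binary "step" actions, the toy reward, and the `mcts_pairs` helper are all invented for illustration, standing in for reasoning steps and a judge's scoring signal.

```python
import math
import random

class Node:
    """A search-tree node; `state` is a partial trajectory (string of steps)."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}   # action -> Node
        self.visits = 0
        self.value = 0.0     # cumulative rollout reward

ACTIONS = ["0", "1"]   # toy "reasoning steps" (1 = correct step)
DEPTH = 5              # trajectory length

def reward(traj):
    """Toy stand-in for a judge's score: fraction of correct steps."""
    return traj.count("1") / DEPTH

def uct_select(node, c=1.4):
    """Pick the child maximizing the UCT score (exploitation + exploration)."""
    return max(
        node.children.values(),
        key=lambda ch: ch.value / ch.visits
        + c * math.sqrt(math.log(node.visits) / ch.visits),
    )

def mcts_pairs(iterations=500, seed=0):
    """Run MCTS; return the best and worst complete trajectories seen,
    forming one (high-quality, low-quality) reasoning-trajectory pair."""
    rng = random.Random(seed)
    root = Node("")
    best, worst = None, None
    for _ in range(iterations):
        # 1. Selection: descend while the node is fully expanded.
        node = root
        while len(node.state) < DEPTH and len(node.children) == len(ACTIONS):
            node = uct_select(node)
        # 2. Expansion: add one untried action, if non-terminal.
        if len(node.state) < DEPTH:
            action = rng.choice([a for a in ACTIONS if a not in node.children])
            child = Node(node.state + action, parent=node)
            node.children[action] = child
            node = child
        # 3. Rollout: complete the trajectory with random steps.
        traj = node.state
        while len(traj) < DEPTH:
            traj += rng.choice(ACTIONS)
        r = reward(traj)
        # Keep the extremes seen so far as a preference pair.
        if best is None or r > best[1]:
            best = (traj, r)
        if worst is None or r < worst[1]:
            worst = (traj, r)
        # 4. Backpropagation: update statistics along the path.
        while node is not None:
            node.visits += 1
            node.value += r
            node = node.parent
    return best, worst
```

Replacing the toy reward with a verifier of judgment correctness, and the binary actions with sampled reasoning steps, yields paired trajectories whose correctness and length vary naturally with the search, which is the kind of contrastive signal capability-driven judge training needs.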