🤖 AI Summary
Automatic evaluation of video understanding models faces two key challenges: conventional metrics (e.g., BLEU, ROUGE, BERTScore) poorly capture fine-grained human judgments, while human evaluation is prohibitively expensive. To address this, we propose VideoJudge, compact multimodal LLM judges (3B and 7B) trained specifically to rate text responses conditioned on videos. The training recipe is a "generate-then-filter" loop built on the interplay between a generator and an evaluator: the generator is prompted to produce responses at a target rating, and responses whose evaluator rating does not match the target are discarded. On three of four meta-evaluation benchmarks, VideoJudge-7B outperforms substantially larger MLLM judge baselines such as Qwen2.5-VL (32B and 72B), showing that a carefully trained small judge can surpass much larger general-purpose models on specialized video evaluation. Our results also highlight the critical role of raw video inputs in judgment quality: text-only LLM judges lag behind MLLM judges, and long chain-of-thought reasoning does not help. VideoJudge offers an efficient, accurate, and scalable path to automatic assessment of video understanding capabilities.
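To make the generate-then-filter step concrete, below is a minimal Python sketch under our own assumptions: the generator and evaluator MLLMs are passed in as opaque callables, and the names (`build_judge_training_set`, `TrainingExample`, `generate`, `judge`) are illustrative rather than the paper's actual interfaces.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TrainingExample:
    video_path: str
    question: str
    response: str
    rating: int  # the target rating the generator was asked to hit

def build_judge_training_set(
    items: list[tuple[str, str]],                # (video_path, question) pairs
    target_ratings: list[int],                   # e.g. [1, 2, 3, 4, 5]
    generate: Callable[[str, str, int], str],    # generator MLLM: (video, question, target rating) -> response
    judge: Callable[[str, str, str], int],       # evaluator MLLM: (video, question, response) -> rating
) -> list[TrainingExample]:
    """Generate rating-conditioned responses and keep only those the evaluator agrees with."""
    kept: list[TrainingExample] = []
    for video_path, question in items:
        for target in target_ratings:
            response = generate(video_path, question, target)
            # Filter step: discard responses whose evaluator rating does not match the target.
            if judge(video_path, question, response) == target:
                kept.append(TrainingExample(video_path, question, response, target))
    return kept
```

The kept examples, each pairing a video-conditioned response with an agreed-upon rating, would then serve as supervision for training the judge.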
📝 Abstract
Precisely evaluating video understanding models remains challenging: commonly used metrics such as BLEU, ROUGE, and BERTScore fail to capture the fine-grained nature of human judgments, while obtaining such judgments through manual evaluation is costly. Recent work has explored using large language models (LLMs) or multimodal LLMs (MLLMs) as evaluators, but their extension to video understanding remains relatively unexplored. In this work, we introduce VideoJudge, a 3B- and 7B-sized MLLM judge specialized to evaluate outputs from video understanding models (*i.e.*, text responses conditioned on videos). To train VideoJudge, our recipe builds on the interplay between a generator and an evaluator: the generator is prompted to produce responses conditioned on a target rating, and responses not matching the evaluator's rating are discarded. Across three out of four meta-evaluation benchmarks, VideoJudge-7B outperforms larger MLLM judge baselines such as Qwen2.5-VL (32B and 72B). Notably, we find that LLM judges (Qwen3) perform worse than MLLM judges (Qwen2.5-VL) and that long chain-of-thought reasoning does not improve performance, indicating that providing video inputs is crucial for evaluating video understanding tasks.
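Since the results suggest that passing the raw video to the judge is what matters, here is a hedged sketch of what a single judge call could look like at evaluation time. The prompt wording, the 1-5 scale, and the `mllm_generate` callable are assumptions for illustration; the paper's actual rubric and model interface may differ.

```python
import re
from typing import Callable, Optional

# Illustrative rubric; the paper's exact judging instructions are not reproduced here.
JUDGE_PROMPT = (
    "You are given a video, a question about it, and a candidate answer.\n"
    "Question: {question}\n"
    "Candidate answer: {response}\n"
    "Rate the answer from 1 (poor) to 5 (excellent). Reply with the rating only."
)

def rate_response(
    video_path: str,
    question: str,
    response: str,
    mllm_generate: Callable[[str, str], str],  # (video path, text prompt) -> judge output text
) -> Optional[int]:
    """Ask the MLLM judge to rate a video-conditioned response and parse the integer rating."""
    prompt = JUDGE_PROMPT.format(question=question, response=response)
    output = mllm_generate(video_path, prompt)  # the video itself is part of the judge's input
    match = re.search(r"[1-5]", output)
    return int(match.group()) if match else None
```

A text-only LLM judge would receive the same prompt without the video argument, which is the comparison the abstract reports as performing worse.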