VideoJudge: Bootstrapping Enables Scalable Supervision of MLLM-as-a-Judge for Video Understanding

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automatic evaluation of video understanding models faces two key challenges: conventional metrics (e.g., BLEU, ROUGE, BERTScore) poorly capture fine-grained human judgments, while human evaluation is prohibitively expensive. To address this, we propose VideoJudge, a generator-evaluator co-training framework built on compact multimodal large language models (3B/7B). Its "generate-then-filter" closed-loop training recipe prompts the generator to produce responses conditioned on a target rating and discards responses whose evaluator rating does not match that target; iterating this loop bootstraps progressively better judgment data. VideoJudge outperforms substantially larger baselines (e.g., Qwen2.5-VL 32B/72B) on three of four meta-evaluation benchmarks, and the results highlight the critical role of raw video inputs in judgment quality. Together, these findings show that a carefully trained small model can surpass much larger counterparts on specialized video evaluation tasks, offering an efficient, accurate, and scalable route to automatic assessment of video understanding.

📝 Abstract
Precisely evaluating video understanding models remains challenging: commonly used metrics such as BLEU, ROUGE, and BERTScore fail to capture the fine-grained nature of human judgment, while obtaining such judgments through manual evaluation is costly. Recent work has explored using large language models (LLMs) or multimodal LLMs (MLLMs) as evaluators, but their extension to video understanding remains relatively unexplored. In this work, we introduce VideoJudge, a 3B and 7B-sized MLLM judge specialized to evaluate outputs from video understanding models (i.e., text responses conditioned on videos). To train VideoJudge, our recipe builds on the interplay between a generator and an evaluator: the generator is prompted to produce responses conditioned on a target rating, and responses not matching the evaluator's rating are discarded. Across three out of four meta-evaluation benchmarks, VideoJudge-7B outperforms larger MLLM judge baselines such as Qwen2.5-VL (32B and 72B). Notably, we find that LLM judges (Qwen3) perform worse than MLLM judges (Qwen2.5-VL) and that long chain-of-thought reasoning does not improve performance, indicating that providing video inputs is crucial for the evaluation of video understanding tasks.
Problem

Research questions and friction points this paper is trying to address.

Evaluating video understanding models with precise human-like judgments
Developing specialized MLLM judges for video understanding evaluation
Creating scalable supervision to replace costly manual video assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

MLLM judge specialized for video understanding evaluation
Generator-evaluator interplay bootstraps training data
Smaller model outperforms larger baselines on benchmarks
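The generator-evaluator interplay above can be sketched as a simple filtering loop. This is a minimal illustrative simulation, not the paper's implementation: the generator and evaluator here are stand-in stubs (the real system conditions MLLMs on videos), and all function names (`generate_response`, `evaluate`, `bootstrap_round`) are hypothetical.

```python
import random

RATINGS = [1, 2, 3, 4, 5]

def generate_response(video_id, target_rating, rng):
    """Stub generator: asked to produce a response of a target quality.

    A real system would prompt an MLLM with the video and the target rating;
    here we just simulate a generator that sometimes misses the target.
    """
    noise = rng.choice([-1, 0, 0, 1])
    achieved = min(max(target_rating + noise, 1), 5)
    return {"video": video_id, "text": f"response@{achieved}",
            "target": target_rating, "achieved": achieved}

def evaluate(sample):
    """Stub evaluator: rates the generated response.

    A real system would be the current judge scoring (video, response).
    """
    return sample["achieved"]

def bootstrap_round(video_ids, rng):
    """One generate-then-filter round: keep only responses whose evaluator
    rating matches the rating the generator was asked to hit."""
    kept = []
    for vid in video_ids:
        target = rng.choice(RATINGS)
        sample = generate_response(vid, target, rng)
        if evaluate(sample) == sample["target"]:
            kept.append(sample)
    return kept

rng = random.Random(0)
data = bootstrap_round(list(range(100)), rng)
print(len(data), "filtered training samples")
```

In the paper's recipe, the surviving (video, response, rating) triples become supervision for the next judge, so the loop can be repeated to bootstrap training data without manual annotation.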