Self-Improving VLM Judges Without Human Annotations

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language model (VLM) evaluators rely heavily on human-annotated preferences, hindering scalability and adaptability to rapid VLM advancements. Method: We propose the first fully annotation-free, automated VLM evaluation framework, built upon an end-to-end self-training paradigm. Starting from Llama-3.2-11B, it autonomously generates multimodal instruction-response pairs and reasoning traces, then iteratively refines evaluator capability via chain-of-thought quality filtering and supervised fine-tuning. Contribution/Results: Crucially, we reformulate evaluation as a self-guided, multi-stage generative task, eliminating dependence on manual annotations. On VL-RewardBench, our evaluator improves overall accuracy from 0.38 to 0.51, outperforming GPT-4o and Claude 3.5 Sonnet across multiple metrics, particularly in reasoning consistency and hallucination detection.

📝 Abstract
Effective judges of Vision-Language Models (VLMs) are crucial for model development. Current methods for training VLM judges mainly rely on large-scale human preference annotations. However, such an approach is costly, and the annotations easily become obsolete as models rapidly improve. In this work, we present a framework to self-train a VLM judge model without any human preference annotations, using only self-synthesized data. Our method is iterative and has three stages: (1) generate diverse multimodal instruction-response pairs at varying quality levels, (2) generate reasoning traces and judgments for each pair, removing the ones that do not match our expected quality levels, and (3) train on correct judge answers and their reasoning traces. We evaluate the resulting judge on Multimodal RewardBench and VL-RewardBench across domains: correctness, preference, reasoning, safety, and visual question-answering. Our method improves a Llama-3.2-11B multimodal judge from 0.38 to 0.51 in overall accuracy on VL-RewardBench, often outperforming much larger models including Llama-3.2-90B, GPT-4o, and Claude 3.5 Sonnet, with particularly strong gains in the general, hallucination, and reasoning dimensions. The overall strength of these human-annotation-free results suggests the potential for a future self-judge that evolves alongside rapidly improving VLM capabilities.
Problem

Research questions and friction points this paper is trying to address.

Training VLM judges without costly human preference annotations
Overcoming annotation obsolescence as vision-language models rapidly improve
Creating self-synthesized data for iterative VLM judge training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-training VLM judges without human annotations
Iterative three-stage framework with self-synthesized data
Improves accuracy by filtering and training on reasoning traces
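The three-stage loop above can be sketched as a toy, self-contained Python simulation. All names and structures here (`Pair`, `generate_pairs`, `judge_with_reasoning`, the fixed judge accuracy) are illustrative assumptions, not the authors' actual implementation; the point is only the control flow: synthesize pairs with a known quality ranking, judge them with reasoning traces, and keep only the judgments whose verdict agrees with the expected ranking.

```python
import random
from dataclasses import dataclass

@dataclass
class Pair:
    instruction: str
    response_a: str
    response_b: str
    expected_winner: str  # known, since responses were synthesized at set quality levels

def generate_pairs(n=8):
    # Stage 1 (toy): synthesize pairs where response A is deliberately the better one.
    return [Pair(f"q{i}", f"good answer {i}", f"bad answer {i}", "A") for i in range(n)]

def judge_with_reasoning(pair, accuracy=0.75):
    # Stage 2 (toy): a noisy judge emits a verdict plus a reasoning trace.
    verdict = pair.expected_winner if random.random() < accuracy else "B"
    trace = f"Comparing '{pair.response_a}' vs '{pair.response_b}': pick {verdict}"
    return verdict, trace

def self_train_iteration(pairs):
    # Stage 2 filter: keep only judgments matching the expected quality ranking.
    kept = []
    for p in pairs:
        verdict, trace = judge_with_reasoning(p)
        if verdict == p.expected_winner:
            kept.append((p, verdict, trace))
    # Stage 3 would fine-tune the judge on `kept`; here we just return the data.
    return kept

random.seed(0)
data = self_train_iteration(generate_pairs())
```

In the paper's actual pipeline, the kept verdicts and their chain-of-thought traces become supervised fine-tuning targets, and the improved judge is fed back into the next iteration.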
🔎 Similar Papers
2024-01-18 · International Conference on Machine Learning · Citations: 264