VLRMBench: A Comprehensive and Challenging Benchmark for Vision-Language Reward Models

📅 2025-03-10
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing vision-language reward models (VLRMs) are evaluated using benchmarks that focus narrowly on single capabilities, hindering comprehensive assessment. To address this, we introduce VLRMBench, the first holistic benchmark covering three dimensions: process understanding, outcome judgment, and critique generation. It comprises 12,634 diverse items across three challenging domains: mathematical reasoning, hallucination understanding, and multi-image understanding. Crucially, these three core VLRM capabilities are systematically decoupled and operationalized through 12 fine-grained, multi-stage evaluation tasks that go beyond conventional binary preference paradigms. Extensive validation on 26 models (21 open-source and 5 proprietary) demonstrates VLRMBench's strong discriminative power and high difficulty; for instance, GPT-4o achieves only 76.0% accuracy on the binary "Forecasting Future" task. All data, annotation guidelines, and fully reproducible evaluation code are publicly released.

📝 Abstract
Although large vision-language models (LVLMs) have demonstrated strong performance in multimodal tasks, errors may occasionally arise due to biases during the reasoning process. Recently, reward models (RMs) have become increasingly pivotal in the reasoning process. Specifically, process RMs evaluate each reasoning step, outcome RMs focus on the assessment of reasoning results, and critique RMs perform error analysis on the entire reasoning process, followed by corrections. However, existing benchmarks for vision-language RMs (VLRMs) typically assess only a single aspect of their capabilities (e.g., distinguishing between two answers), limiting all-round evaluation and restricting the development of RMs in the vision-language domain. To address this gap, we propose a comprehensive and challenging benchmark, dubbed VLRMBench, encompassing 12,634 questions. VLRMBench is constructed from three distinct types of datasets, covering mathematical reasoning, hallucination understanding, and multi-image understanding. We design 12 tasks across three major categories, focusing on evaluating VLRMs in the aspects of process understanding, outcome judgment, and critique generation. Extensive experiments are conducted on 21 open-source models and 5 advanced closed-source models, highlighting the challenges posed by VLRMBench. For instance, on "Forecasting Future", a binary classification task, the advanced GPT-4o achieves only 76.0% accuracy. Additionally, we perform comprehensive analytical studies, offering valuable insights for the future development of VLRMs. We anticipate that VLRMBench will serve as a pivotal benchmark in advancing VLRMs. Code and datasets will be available at https://github.com/JCruan519/VLRMBench.
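As a rough illustration of how a binary task such as "Forecasting Future" might be scored, the minimal sketch below computes accuracy over a JSONL split. The field names (image, question, reasoning, label) and the judge() stub are hypothetical placeholders, not VLRMBench's actual schema or API; the official evaluation code is in the repository linked above.

```python
import json

def judge(image_path: str, question: str, reasoning: str) -> bool:
    """Hypothetical VLRM call returning a binary judgment (e.g., whether
    the given reasoning will lead to a correct future state). Replace
    with an actual model inference."""
    raise NotImplementedError

def accuracy(items_path: str) -> float:
    """Compute accuracy over a JSONL file with one benchmark item per line."""
    correct = total = 0
    with open(items_path) as f:
        for line in f:
            # Assumed item fields: "image", "question", "reasoning", "label".
            item = json.loads(line)
            pred = judge(item["image"], item["question"], item["reasoning"])
            correct += int(pred == bool(item["label"]))
            total += 1
    return correct / total if total else 0.0
```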
Problem

Research questions and friction points this paper is trying to address.

Existing VLRM benchmarks assess only a single capability (e.g., distinguishing between two answers), preventing comprehensive evaluation.
Proposes VLRMBench, a benchmark for comprehensive VLRM assessment.
Covers process understanding, outcome judgment, and critique generation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive benchmark for vision-language reward models
Evaluates process understanding, outcome judgment, and critique generation across 12 tasks
Includes 12,634 questions spanning mathematical reasoning, hallucination understanding, and multi-image understanding