RBench-V: A Primary Assessment for Visual Reasoning Models with Multi-modal Outputs

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal reasoning benchmarks predominantly evaluate models' ability to process multimodal inputs (e.g., images and text) for textual inference, overlooking their capacity to *generate* multimodal outputs, such as auxiliary diagrams, sketches, or geometric constructions, that actively support visual reasoning. Method: We introduce RBench-V, the first benchmark centered on *multimodal output* for visual reasoning. It comprises 803 carefully curated problems spanning math, physics, counting, and games, all requiring explicit visual operations (e.g., drawing auxiliary lines or generating schematic diagrams). RBench-V pioneers the evaluation of "Multimodal Chain-of-Thought" (M-CoT), in which active image construction is a core assessment criterion. It combines expert-crafted problem sets, cross-model evaluation (including o3, Gemini 2.5 Pro, and Qwen2.5-VL), and human performance baselines. Results: The best-performing model, o3, achieves only 25.8% accuracy, far below human performance (82.3%), exposing a fundamental bottleneck in current multimodal models' capacity for vision-driven, generative reasoning.

📝 Abstract
The rapid advancement of native multi-modal models and omni-models, exemplified by GPT-4o, Gemini, and o3, with their capability to process and generate content across modalities such as text and images, marks a significant milestone in the evolution of intelligence. Systematic evaluation of their multi-modal output capabilities in visual thinking processes (also known as multi-modal chain of thought, M-CoT) becomes critically important. However, existing benchmarks for evaluating multi-modal models primarily focus on assessing multi-modal inputs and text-only reasoning while neglecting the importance of reasoning through multi-modal outputs. In this paper, we present a benchmark, dubbed RBench-V, designed to assess models' vision-indispensable reasoning abilities. To construct RBench-V, we carefully hand-pick 803 questions covering math, physics, counting, and games. Unlike previous benchmarks that typically specify certain input modalities, RBench-V presents problems centered on multi-modal outputs, which require image manipulation such as generating novel images and constructing auxiliary lines to support the reasoning process. We evaluate numerous open- and closed-source models on RBench-V, including o3, Gemini 2.5 Pro, Qwen2.5-VL, etc. Even the best-performing model, o3, achieves only 25.8% accuracy on RBench-V, far below the human score of 82.3%, highlighting that current models struggle to leverage multi-modal reasoning. Data and code are available at https://evalmodels.github.io/rbenchv
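To make the headline numbers concrete, below is a minimal sketch of how accuracy on RBench-V could be computed, assuming a hypothetical JSONL export of the 803 questions and a generic `model_answer_fn` callable. The file name, field names, and exact-match scoring are illustrative assumptions, not the authors' released evaluation code; see the linked repository for the official data and scripts.

```python
import json

def load_questions(path):
    """Load RBench-V questions from a local JSONL export.

    The path and the field names ("image", "question", "answer")
    are illustrative assumptions, not the authors' released schema.
    """
    with open(path, "r", encoding="utf-8") as f:
        return [json.loads(line) for line in f]

def evaluate(model_answer_fn, questions):
    """Score by exact match and return plain accuracy in [0, 1]."""
    correct = 0
    for q in questions:
        # model_answer_fn is any callable that takes an image path and
        # a question string and returns the model's final answer text.
        predicted = model_answer_fn(q["image"], q["question"])
        if predicted.strip().lower() == q["answer"].strip().lower():
            correct += 1
    return correct / len(questions)

# Usage (hypothetical): evaluate(my_model, load_questions("rbench_v.jsonl"))
```

Under a plain-accuracy protocol like this, o3's reported 25.8% corresponds to roughly 207 of the 803 questions answered correctly.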
Problem

Research questions and friction points this paper is trying to address.

Evaluating multi-modal output capabilities in visual reasoning
Assessing vision-indispensable reasoning with image manipulation tasks
Benchmarking models' performance in multi-modal chain of thought
Innovation

Methods, ideas, or system contributions that make the work stand out.

RBench-V benchmark for multi-modal output evaluation
803 hand-picked questions for visual reasoning tasks
Image manipulation (e.g., generating novel images or constructing auxiliary lines) required as part of the reasoning process; a sketch of such a trace follows this list
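As an illustration of what a multimodal chain of thought looks like as data, here is a minimal sketch of an interleaved text-and-image reasoning trace. The `MCoTStep` and `MCoTTrace` names and fields are hypothetical, not part of the benchmark's codebase.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MCoTStep:
    """One step of a multimodal chain of thought: text, an image, or both.

    A generated image could be an auxiliary-line construction or a
    schematic diagram of the kind RBench-V's problems call for.
    """
    text: str = ""
    image_png: Optional[bytes] = None  # raw bytes of a model-generated image

@dataclass
class MCoTTrace:
    """A full reasoning trace: interleaved steps plus a final answer."""
    steps: List[MCoTStep] = field(default_factory=list)
    final_answer: str = ""

    def used_images(self) -> bool:
        # Vision-indispensable problems should make this True on
        # successful traces; purely textual chains are expected to fail.
        return any(s.image_png is not None for s in self.steps)
```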