VER-Bench: Evaluating MLLMs on Reasoning with Fine-Grained Visual Evidence

📅 2025-08-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing vision evaluation benchmarks for multimodal large language models (MLLMs) emphasize salient object recognition or coarse-grained reasoning, neglecting fine-grained local visual cues (averaging only 0.25% of image area) and their integration with domain knowledge for complex inference. Method: We propose VER-Bench, the first systematic benchmark for evaluating fine-grained, evidence-driven visual reasoning. It introduces a six-category evidence framework covering geography, temporal relations, context, and more; incorporates structured visual-cue localization and multi-step reasoning-path annotation; and employs 374 expert-crafted questions for interpretable, knowledge-augmented, multi-dimensional evaluation. Contribution/Results: Experiments reveal substantial deficiencies in state-of-the-art MLLMs in extracting minute critical visual evidence and constructing coherent evidentiary chains. VER-Bench thus establishes a rigorous, human-aligned assessment protocol and identifies concrete directions for advancing human-like visual understanding in MLLMs.

📝 Abstract
With the rapid development of MLLMs, evaluating their visual capabilities has become increasingly crucial. Current benchmarks primarily fall into two main types: basic perception benchmarks, which focus on local details but lack deep reasoning (e.g., "what is in the image?"), and mainstream reasoning benchmarks, which concentrate on prominent image elements but may fail to assess subtle clues requiring intricate analysis. However, profound visual understanding and complex reasoning depend more on interpreting subtle, inconspicuous local details than on perceiving salient, macro-level objects. These details, though occupying minimal image area, often contain richer, more critical information for robust analysis. To bridge this gap, we introduce VER-Bench, a novel framework to evaluate MLLMs' ability to: 1) identify fine-grained visual clues, often occupying on average just 0.25% of the image area; 2) integrate these clues with world knowledge for complex reasoning. VER-Bench comprises 374 carefully designed questions across Geospatial, Temporal, Situational, Intent, System State, and Symbolic reasoning, and each question is accompanied by structured evidence: visual clues and question-related reasoning derived from them. VER-Bench reveals current models' limitations in extracting subtle visual evidence and constructing evidence-based arguments, highlighting the need to enhance models' capabilities in fine-grained visual evidence extraction, integration, and reasoning for genuine visual understanding and human-like analysis. Dataset and additional materials are available at https://github.com/verbta/ACMMM-25-Materials.
Problem

Research questions and friction points this paper is trying to address.

Evaluating MLLMs' ability to identify fine-grained visual clues
Assessing integration of subtle details with world knowledge for reasoning
Addressing limitations in extracting and reasoning with visual evidence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates MLLMs on fine-grained visual evidence
Integrates subtle clues with world knowledge
Includes 374 questions across six reasoning types
👥 Authors
Chenhui Qiang — University of Chinese Academy of Sciences, Beijing, China
Zhaoyang Wei — University of Chinese Academy of Sciences
Xumeng Han — University of Chinese Academy of Sciences
Zipeng Wang — University of Chinese Academy of Sciences, Beijing, China
Siyao Li — University of Chinese Academy of Sciences, Beijing, China
Xiangyuan Lan — Pengcheng Laboratory
Jianbin Jiao — University of Chinese Academy of Sciences
Zhenjun Han — University of Chinese Academy of Sciences, Beijing, China