🤖 AI Summary
Existing multimodal large language model (MLLM) vision benchmarks emphasize salient object recognition or coarse-grained reasoning, neglecting fine-grained local visual cues (averaging only 0.25% of image area) and their integration with domain knowledge for complex inference. Method: We propose VER-Bench, the first systematic benchmark for evaluating fine-grained, evidence-driven visual reasoning. It introduces a six-category evidence framework covering geospatial, temporal, situational, intent, system-state, and symbolic reasoning; incorporates structured visual-cue localization and multi-step reasoning-path annotation; and comprises 374 expert-crafted questions for interpretable, knowledge-augmented, multi-dimensional evaluation. Contribution/Results: Experiments reveal substantial deficiencies in state-of-the-art MLLMs in extracting minute critical visual evidence and constructing coherent evidentiary chains. VER-Bench thus establishes a rigorous, human-aligned assessment protocol and identifies concrete directions for advancing human-like visual understanding in MLLMs.
📝 Abstract
With the rapid development of MLLMs, evaluating their visual capabilities has become increasingly crucial. Current benchmarks fall into two main types: basic perception benchmarks, which focus on local details but lack deep reasoning (e.g., "what is in the image?"), and mainstream reasoning benchmarks, which concentrate on prominent image elements but may fail to assess subtle clues requiring intricate analysis. However, profound visual understanding and complex reasoning depend more on interpreting subtle, inconspicuous local details than on perceiving salient, macro-level objects. These details, though occupying minimal image area, often carry richer, more critical information for robust analysis. To bridge this gap, we introduce VER-Bench, a novel framework for evaluating MLLMs' ability to: 1) identify fine-grained visual clues, which occupy on average just 0.25% of the image area; 2) integrate these clues with world knowledge for complex reasoning. VER-Bench comprises 374 carefully designed questions spanning Geospatial, Temporal, Situational, Intent, System State, and Symbolic reasoning; each question is accompanied by structured evidence: visual clues and question-related reasoning derived from them. VER-Bench reveals current models' limitations in extracting subtle visual evidence and constructing evidence-based arguments, highlighting the need to strengthen models' capabilities in fine-grained visual evidence extraction, integration, and reasoning for genuine visual understanding and human-like analysis. The dataset and additional materials are available at https://github.com/verbta/ACMMM-25-Materials.