Beyond Perception: Evaluating Abstract Visual Reasoning through Multi-Stage Task

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current multimodal large language models (MLLMs) exhibit weak performance on abstract visual reasoning (AVR) tasks, primarily because existing benchmarks evaluate only final outputs—neglecting fine-grained assessment of multi-stage reasoning processes. Method: We introduce MultiStAR, the first multi-stage AVR benchmark tailored for MLLMs, built upon RAVEN with a hierarchical task structure. It integrates human-crafted rule modeling and procedural generation to ensure controllability and scalability, enabling reproducible reasoning analysis across 17 mainstream MLLMs. We further propose MSEval, a fine-grained evaluation framework that concurrently measures correctness of intermediate reasoning steps and final answer accuracy. Results: Experiments reveal that while MLLMs perform well in basic perceptual stages, their accuracy drops significantly during complex rule identification and relational deduction. This confirms the critical value of multi-stage evaluation in diagnosing fundamental reasoning deficits in MLLMs.
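The summary describes MSEval as jointly scoring intermediate reasoning steps and the final answer. As a purely illustrative sketch (the paper's actual formula is not reproduced here, and the function name and weighting are assumptions), such a metric could blend per-stage correctness with final-answer accuracy:

```python
# Hypothetical sketch of a multi-stage metric in the spirit of MSEval.
# The equal-weight blending below is illustrative, not the paper's formula.

def multi_stage_score(step_correct: list[bool], final_correct: bool,
                      step_weight: float = 0.5) -> float:
    """Blend intermediate-step accuracy with final-answer correctness."""
    step_acc = sum(step_correct) / len(step_correct) if step_correct else 0.0
    return step_weight * step_acc + (1 - step_weight) * float(final_correct)

# A model that reaches the right answer but gets only 1 of 4 reasoning
# steps correct scores lower than plain final-answer accuracy would suggest.
score = multi_stage_score([True, False, False, False], final_correct=True)
```

Under this sketch, rewarding intermediate steps is what lets the benchmark distinguish a model that genuinely follows the rule-identification stages from one that guesses the final panel correctly.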

📝 Abstract
Current Multimodal Large Language Models (MLLMs) excel in general visual reasoning but remain underexplored in Abstract Visual Reasoning (AVR), which demands higher-order reasoning to identify abstract rules beyond simple perception. Existing AVR benchmarks focus on single-step reasoning, emphasizing the end result while neglecting the multi-stage nature of the reasoning process. Past studies have found that MLLMs struggle with these benchmarks, but they do not explain how the models fail. To address this gap, we introduce MultiStAR, a Multi-Stage AVR benchmark based on RAVEN, designed to assess reasoning across varying levels of complexity. Additionally, existing metrics such as accuracy focus only on final outcomes and do not account for the correctness of intermediate steps. We therefore propose a novel metric, MSEval, which considers the correctness of intermediate steps in addition to the final outcomes. We conduct comprehensive experiments on MultiStAR using 17 representative closed-source and open-source MLLMs. The results reveal that while existing MLLMs perform adequately on basic perception tasks, they continue to face challenges in more complex rule detection stages.
Problem

Research questions and friction points this paper is trying to address.

Assessing Abstract Visual Reasoning (AVR) in MLLMs beyond perception
Addressing lack of multi-stage reasoning evaluation in AVR benchmarks
Proposing new metrics to measure intermediate-step correctness in AVR
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces MultiStAR benchmark for multi-stage AVR
Proposes MSEval metric for intermediate step correctness
Evaluates 17 MLLMs on complex rule detection