🤖 AI Summary
Current multimodal large language models (MLLMs) perform poorly on abstract visual reasoning (AVR) tasks, primarily because existing benchmarks evaluate only final outputs and neglect fine-grained assessment of the multi-stage reasoning process.
Method: We introduce MultiStAR, the first multi-stage AVR benchmark tailored for MLLMs, built upon RAVEN with a hierarchical task structure. It integrates human-crafted rule modeling and procedural generation to ensure controllability and scalability, enabling reproducible reasoning analysis across 17 mainstream MLLMs. We further propose MSEval, a fine-grained evaluation framework that concurrently measures correctness of intermediate reasoning steps and final answer accuracy.
Results: Experiments reveal that while MLLMs perform well in basic perceptual stages, their accuracy drops significantly during complex rule identification and relational deduction. This confirms the critical value of multi-stage evaluation in diagnosing fundamental reasoning deficits in MLLMs.
📝 Abstract
Current Multimodal Large Language Models (MLLMs) excel in general visual reasoning but remain underexplored in Abstract Visual Reasoning (AVR), which demands higher-order reasoning to identify abstract rules beyond simple perception. Existing AVR benchmarks focus on single-step reasoning, emphasizing the end result while neglecting the multi-stage nature of the reasoning process. Past studies found that MLLMs struggle with these benchmarks, but they do not explain how the models fail. To address this gap, we introduce MultiStAR, a Multi-Stage AVR benchmark based on RAVEN, designed to assess reasoning across varying levels of complexity. Additionally, existing metrics such as accuracy focus only on final outcomes and do not account for the correctness of intermediate steps. Therefore, we propose a novel metric, MSEval, which considers the correctness of intermediate steps in addition to the final outcomes. We conduct comprehensive experiments on MultiStAR using 17 representative closed-source and open-source MLLMs. The results reveal that while existing MLLMs perform adequately on basic perception tasks, they continue to face challenges in more complex rule detection stages.
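To make the idea of a stage-aware metric concrete, here is a minimal sketch of how intermediate-step correctness could be combined with final-answer accuracy. This is an illustrative assumption, not the paper's actual MSEval formulation: the function name `mseval_like_score` and the linear `stage_weight` combination are hypothetical.

```python
from typing import Sequence

def mseval_like_score(stage_correct: Sequence[bool], final_correct: bool,
                      stage_weight: float = 0.5) -> float:
    """Score a single example by blending intermediate-stage correctness
    with final-answer correctness.

    `stage_weight` balances the two terms; the value 0.5 is an assumption
    for illustration, not taken from the paper.
    """
    if not stage_correct:
        # No intermediate stages recorded: fall back to final-answer accuracy.
        return float(final_correct)
    stage_acc = sum(stage_correct) / len(stage_correct)
    return stage_weight * stage_acc + (1 - stage_weight) * float(final_correct)

# A model that solves 3 of 4 intermediate stages but misses the final answer
# still receives partial credit, unlike plain accuracy:
partial = mseval_like_score([True, True, True, False], final_correct=False)
```

Under plain accuracy this example would score 0; a stage-aware metric distinguishes it from a model that fails every stage, which is exactly the diagnostic signal the multi-stage evaluation is designed to surface.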