Your Reasoning Benchmark May Not Test Reasoning: Revealing Perception Bottleneck in Abstract Reasoning Benchmarks

📅 2025-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper challenges the validity of mainstream abstract reasoning benchmarks—such as ARC—in assessing genuine machine reasoning, arguing that performance bottlenecks stem primarily from visual perception limitations rather than inferential deficits. To address this, the authors propose a two-stage decoupled evaluation paradigm: (1) perceptual stage—input images are independently transcribed into natural language descriptions; and (2) reasoning stage—rule induction and application are performed exclusively on textual inputs. Through systematic experiments across Mini-ARC, ACRE, Bongard-LOGO, and diverse vision-language models (VLMs) paired with symbolic reasoners, they quantitatively demonstrate that ~80% of VLM failures originate from perceptual errors; adopting the proposed paradigm yields substantial accuracy improvements. Key contributions include: (1) identifying and characterizing a pervasive perception-dominant bias in abstract reasoning benchmarks; (2) establishing the first rigorously decoupled evaluation framework that isolates perception from reasoning; and (3) correcting the prevalent misattribution of poor performance to weak reasoning, thereby enabling more precise and faithful assessment of reasoning capabilities.

📝 Abstract
Reasoning benchmarks such as the Abstraction and Reasoning Corpus (ARC) and ARC-AGI are widely used to assess progress in artificial intelligence and are often interpreted as probes of core, so-called "fluid" reasoning abilities. Despite their apparent simplicity for humans, these tasks remain challenging for frontier vision-language models (VLMs), a gap commonly attributed to deficiencies in machine reasoning. We challenge this interpretation and hypothesize that the gap arises primarily from limitations in visual perception rather than from shortcomings in inductive reasoning. To verify this hypothesis, we introduce a two-stage experimental pipeline that explicitly separates perception and reasoning. In the perception stage, each image is independently converted into a natural-language description, while in the reasoning stage a model induces and applies rules using these descriptions. This design prevents leakage of cross-image inductive signals and isolates reasoning from perception bottlenecks. Across three ARC-style datasets (Mini-ARC, ACRE, and Bongard-LOGO), we show that perception capability is the dominant factor underlying the observed performance gap by comparing the two-stage pipeline against standard end-to-end one-stage evaluation. Manual inspection of reasoning traces in the VLM outputs further reveals that approximately 80 percent of model failures stem from perception errors. Together, these results demonstrate that ARC-style benchmarks conflate perceptual and reasoning challenges and that observed performance gaps may overstate deficiencies in machine reasoning. Our findings underscore the need for evaluation protocols that disentangle perception from reasoning when assessing progress in machine intelligence.
Problem

Research questions and friction points this paper is trying to address.

Challenges the assumption that reasoning benchmarks primarily test machine reasoning abilities
Identifies visual perception limitations as the main bottleneck in abstract reasoning tasks
Proposes separating perception and reasoning to accurately evaluate machine intelligence progress
Innovation

Methods, ideas, or system contributions that make the work stand out.

Separates perception and reasoning in two stages
Converts images to natural-language descriptions first
Isolates reasoning from perception bottlenecks explicitly
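The two-stage paradigm above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names (`perceive`, `reason`, `two_stage_eval`) and the stubbed model calls are assumptions standing in for real VLM and LLM/symbolic-reasoner calls.

```python
# Hypothetical sketch of the two-stage decoupled evaluation paradigm.
# Stage 1 transcribes each image independently (so no cross-image
# inductive signal leaks into perception); Stage 2 induces and applies
# a rule from the text descriptions alone.

def perceive(image):
    """Stage 1: transcribe one image into a natural-language description.
    A real pipeline would call a VLM here; we stub it with a lookup."""
    stub_descriptions = {
        "img_a": "a 3x3 grid with the top row colored red",
        "img_b": "a 3x3 grid with the middle row colored red",
        "img_query": "a 3x3 grid with the bottom row colored blue",
    }
    return stub_descriptions[image]

def reason(train_descriptions, query_description):
    """Stage 2: rule induction and application on text only.
    A real pipeline would prompt an LLM or symbolic reasoner;
    we stub a toy induced rule: recolor the highlighted row red."""
    return query_description.replace("blue", "red")

def two_stage_eval(train_images, query_image):
    # Each image is described independently, in isolation from the others.
    train_desc = [perceive(img) for img in train_images]
    query_desc = perceive(query_image)
    # Reasoning sees only text, never pixels.
    return reason(train_desc, query_desc)

prediction = two_stage_eval(["img_a", "img_b"], "img_query")
```

Comparing accuracy under this pipeline with standard end-to-end evaluation is what lets the paper attribute failures to perception rather than reasoning.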
Xinhe Wang
University of Michigan
Data-Centric AI · LLM Agents
Jin Huang
University of Michigan
Xingjian Zhang
University of Michigan
Tianhao Wang
University of California San Diego
Jiaqi W. Ma
Assistant Professor, University of Illinois Urbana-Champaign
Data-Centric AI · Data Attribution · Training Data Curation