ARC Is a Vision Problem!

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
ARC benchmarks abstract reasoning, yet existing approaches predominantly cast the task in language, relying on large language models (LLMs) or recurrent reasoning architectures and thereby neglecting its intrinsically visual nature. This work proposes a purely visual formulation of ARC as end-to-end image-to-image translation. It introduces a canvas-based representation that models inputs and outputs at the pixel level, applies a standard Vision Transformer (ViT) trained from scratch, and uses test-time training to adapt to unseen tasks. Crucially, the method operates without linguistic priors, textual interfaces, or symbolic grounding. On the ARC-1 benchmark it achieves 60.4% accuracy, substantially outperforming prior pure-vision baselines and approaching both leading LLMs and average human performance. These results support the viability of a vision-centric paradigm for abstract reasoning tasks.
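The summary mentions that a vanilla ViT processes the canvas. The core of that tokenization step can be sketched as follows; the patch size and row-major ordering here are generic ViT conventions, not the paper's exact configuration:

```python
def patchify(img, patch=2):
    """Split a square canvas into non-overlapping patch x patch tokens,
    flattened in row-major order, as a vanilla ViT would tokenize an image."""
    n = len(img)
    assert n % patch == 0, "canvas side must be divisible by patch size"
    tokens = []
    for pr in range(0, n, patch):        # patch rows
        for pc in range(0, n, patch):    # patch columns
            tokens.append([img[r][c]
                           for r in range(pr, pr + patch)
                           for c in range(pc, pc + patch)])
    return tokens

img = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 0, 1],
       [2, 3, 4, 5]]
tokens = patchify(img)  # 4 tokens, each a flattened 2x2 patch
```

Each token would then be embedded and fed through standard transformer blocks, with a per-pixel (or per-patch) prediction head producing the output grid.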

📝 Abstract
The Abstraction and Reasoning Corpus (ARC) is designed to promote research on abstract reasoning, a fundamental aspect of human intelligence. Common approaches to ARC treat it as a language-oriented problem, addressed by large language models (LLMs) or recurrent reasoning models. However, although the puzzle-like tasks in ARC are inherently visual, existing research has rarely approached the problem from a vision-centric perspective. In this work, we formulate ARC within a vision paradigm, framing it as an image-to-image translation problem. To incorporate visual priors, we represent the inputs on a "canvas" that can be processed like natural images. It is then natural for us to apply standard vision architectures, such as a vanilla Vision Transformer (ViT), to perform image-to-image mapping. Our model is trained from scratch solely on ARC data and generalizes to unseen tasks through test-time training. Our framework, termed Vision ARC (VARC), achieves 60.4% accuracy on the ARC-1 benchmark, substantially outperforming existing methods that are also trained from scratch. Our results are competitive with those of leading LLMs and close the gap to average human performance.
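The "canvas" idea can be illustrated with a minimal sketch: pad a variable-size ARC grid onto a fixed-size canvas so that every task becomes a same-shape image-to-image mapping. The canvas size, padding value, and top-left placement below are illustrative assumptions, not the paper's exact settings:

```python
PAD = 10      # hypothetical background value outside the 10 ARC colors (0-9)
CANVAS = 32   # hypothetical fixed canvas side length (ARC grids are at most 30x30)

def to_canvas(grid, size=CANVAS, pad=PAD):
    """Place an ARC grid (list of lists of color ids) in the top-left
    corner of a size x size canvas filled with the pad value."""
    h, w = len(grid), len(grid[0])
    assert h <= size and w <= size, "grid must fit on the canvas"
    canvas = [[pad] * size for _ in range(size)]
    for r in range(h):
        for c in range(w):
            canvas[r][c] = grid[r][c]
    return canvas

example = [[1, 2], [3, 4]]
canvas = to_canvas(example)  # 32x32 image-like array, mostly PAD
```

With inputs and outputs both rendered this way, the model can be trained as a plain pixel-to-pixel translator without any task-specific shape handling.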
Problem

Research questions and friction points this paper is trying to address.

Common approaches treat ARC as a language problem, overlooking its inherently visual nature
Pure-vision methods trained from scratch have lagged far behind LLMs and humans on ARC
Can a standard vision architecture, with the right input representation, close that gap?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Framing ARC as image-to-image translation problem
Using Vision Transformer for visual reasoning tasks
Employing test-time training for generalization