Unfolding Spatial Cognition: Evaluating Multimodal Models on Visual Simulations

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing AI benchmarks predominantly emphasize linguistic reasoning while neglecting systematic evaluation of non-linguistic, multi-step visual simulation capabilities. Method: We introduce STARE, a 4K-task benchmark explicitly designed for multi-step visual simulation, covering 2D/3D geometric transformations, cube net folding, tangram assembly, and real-world spatial reasoning. Leveraging synthetically generated visual reasoning tasks and human behavioral experiments (response time and accuracy), we conduct cross-model evaluation of state-of-the-art multimodal models including GPT-4o, Claude-3.5, and Gemini-2.0 Flash. Results: Models achieve strong performance on simple 2D tasks but degrade to chance-level accuracy on 3D folding and tangram tasks that require intermediate visual representations; in contrast, humans solve such tasks roughly 7.5 seconds faster on average when given intermediate visual simulations. This work identifies a critical bottleneck in multimodal large models' visual simulation chains and establishes a new paradigm for assessing embodied intelligence and spatial cognition.
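
As a rough illustration of the evaluation setup described above, the cross-model comparison could be organized as a loop over task families under two conditions (with and without intermediate simulation frames). This is a minimal sketch, not the authors' released harness: the task schema, the `query_model` helper, and the convention that the first image is the base stimulus are all illustrative assumptions.

```python
# Hypothetical sketch of a STARE-style evaluation loop. The Task schema,
# query_model() helper, and image ordering are illustrative placeholders,
# not details taken from the paper's code release.
from dataclasses import dataclass, field

@dataclass
class Task:
    family: str               # e.g. "2d_transform", "cube_net_folding", "tangram"
    prompt: str               # textual instruction shown to the model
    images: list = field(default_factory=list)  # base stimulus + optional simulation frames
    answer: str = ""          # gold label

def query_model(model_name: str, prompt: str, images: list) -> str:
    """Placeholder for a multimodal API call (chat completion with images)."""
    raise NotImplementedError

def evaluate(tasks: list, models: list, with_simulation: bool) -> dict:
    """Accuracy per (model, task family); with_simulation toggles whether
    intermediate visual-simulation frames are included in the input,
    mirroring the paper's two evaluation conditions."""
    counts = {}
    for model in models:
        for task in tasks:
            # Assumption: images[0] is the base stimulus, the rest are frames.
            images = task.images if with_simulation else task.images[:1]
            pred = query_model(model, task.prompt, images)
            hits, total = counts.get((model, task.family), (0, 0))
            counts[(model, task.family)] = (hits + (pred.strip() == task.answer), total + 1)
    return {key: hits / total for key, (hits, total) in counts.items()}
```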

📝 Abstract
Spatial cognition is essential for human intelligence, enabling problem-solving through visual simulations rather than solely relying on verbal reasoning. However, existing AI benchmarks primarily assess verbal reasoning, neglecting the complexities of non-verbal, multi-step visual simulation. We introduce STARE (Spatial Transformations and Reasoning Evaluation), a benchmark designed to rigorously evaluate multimodal large language models on tasks better solved through multi-step visual simulation. STARE features 4K tasks spanning foundational geometric transformations (2D and 3D), integrated spatial reasoning (cube net folding and tangram puzzles), and real-world spatial reasoning (perspective and temporal reasoning), reflecting practical cognitive challenges like object assembly, mechanical diagram interpretation, and everyday spatial navigation. Our evaluations show that models excel at reasoning over simpler 2D transformations, but perform close to random chance on more complex tasks like 3D cube net folding and tangram puzzles that require multi-step visual simulations. Humans achieve near-perfect accuracy but take considerable time (up to 28.9s) on complex tasks, significantly speeding up (down by 7.5 seconds on average) with intermediate visual simulations. In contrast, models exhibit inconsistent performance gains from visual simulations, improving on most tasks but declining in specific cases like tangram puzzles (GPT-4o, o1) and cube net folding (Claude-3.5, Gemini-2.0 Flash), indicating that models may not know how to effectively leverage intermediate visual information.
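
Several of the abstract's claims hinge on whether a model's accuracy is statistically distinguishable from random chance. A simple way to check such a claim is a one-sided binomial test of observed accuracy against the chance rate; the sketch below assumes 4-way multiple choice (25% chance) and uses made-up counts purely for illustration, as the paper does not specify these details here.

```python
# Minimal sketch: test whether observed accuracy beats random guessing.
# The 25% chance rate assumes 4-way multiple choice, and the counts are
# illustrative placeholders, not results from the paper.
from scipy.stats import binomtest

n_correct, n_total, chance = 265, 1000, 0.25
result = binomtest(n_correct, n_total, p=chance, alternative="greater")
print(f"accuracy={n_correct / n_total:.3f}, p-value={result.pvalue:.3f}")
# A large p-value means we cannot reject chance-level performance.
```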
Problem

Research questions and friction points this paper is trying to address.

Evaluating AI models on visual spatial reasoning tasks
Assessing multimodal models' ability to perform multi-step visual simulations
Benchmarking performance on complex 3D and real-world spatial challenges
Innovation

Methods, ideas, or system contributions that make the work stand out.

STARE benchmark for multimodal spatial reasoning
Evaluates 4K tasks requiring multi-step visual simulation
Reveals that models struggle with complex 3D transformations