🤖 AI Summary
This work addresses the limited spatial reasoning capabilities of current vision-language models (VLMs) in real-world, unconstrained scenarios, where visual noise and diverse spatial relationships pose significant challenges. To this end, we introduce SpatiaLab, the first comprehensive benchmark for spatial reasoning in natural settings, encompassing 30 fine-grained tasks across six categories: relative positioning, depth & occlusion, orientation, size & scale, spatial navigation, and 3D geometry. The benchmark comprises 1,400 real-world visual question-answering pairs and supports both multiple-choice and open-ended evaluation formats. Systematic evaluations of leading open- and closed-source VLMs, both general-purpose and specialized, reveal a substantial performance gap relative to humans: for instance, InternVL3.5-72B achieves only 54.93% accuracy on multiple-choice questions versus 87.57% for humans, highlighting critical bottlenecks in complex spatial understanding.
📝 Abstract
Spatial reasoning is a fundamental aspect of human cognition, yet it remains a major challenge for contemporary vision-language models (VLMs). Prior work has largely relied on synthetic or LLM-generated environments with limited task designs and puzzle-like setups, failing to capture the real-world complexity, visual noise, and diverse spatial relationships that VLMs encounter. To address this, we introduce SpatiaLab, a comprehensive benchmark for evaluating VLMs' spatial reasoning in realistic, unconstrained contexts. SpatiaLab comprises 1,400 visual question-answer pairs spanning six major categories: Relative Positioning, Depth & Occlusion, Orientation, Size & Scale, Spatial Navigation, and 3D Geometry. Each category has five subcategories, yielding 30 distinct task types; each subcategory contains at least 25 questions and each main category at least 200, supporting both multiple-choice and open-ended evaluation. Experiments across diverse state-of-the-art VLMs, including open- and closed-source models as well as reasoning-focused and specialized spatial-reasoning models, reveal a substantial gap in spatial reasoning capabilities compared with humans. In the multiple-choice setup, InternVL3.5-72B achieves 54.93% accuracy versus 87.57% for humans. In the open-ended setting, all models drop by roughly 10-25%, with GPT-5-mini scoring highest at 40.93% versus 64.93% for humans. These results highlight key limitations in handling complex spatial relationships, depth perception, navigation, and 3D geometry. By providing a diverse, real-world evaluation framework, SpatiaLab exposes critical challenges and opportunities for advancing VLMs' spatial reasoning and offers a benchmark to guide future research toward robust, human-aligned spatial understanding. SpatiaLab is available at: https://spatialab-reasoning.github.io/.
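To make the benchmark's structure and the two evaluation formats concrete, here is a minimal, hypothetical sketch of what a SpatiaLab-style item and an accuracy loop might look like in Python. The field names, the `predict` wrapper, and the exact-match scoring rule are illustrative assumptions, not the benchmark's released data schema or its official grading protocol.

```python
# Hypothetical sketch of a SpatiaLab-style record and evaluation loop.
# Field names and scoring are assumptions for illustration; the paper does
# not specify the released schema, and open-ended answers are likely graded
# more flexibly than the exact string match used here.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SpatiaLabItem:
    image_path: str    # real-world photograph
    question: str      # spatial reasoning question about the image
    answer: str        # gold answer
    category: str      # one of six, e.g. "Depth & Occlusion"
    subcategory: str   # one of five per category (6 x 5 = 30 task types)
    choices: List[str] = field(default_factory=list)  # empty => open-ended

def accuracy(items: List[SpatiaLabItem],
             predict: Callable[[SpatiaLabItem], str]) -> float:
    """Exact-match accuracy over a list of items.

    `predict` wraps any VLM: it takes one item and returns a string answer
    (a choice for multiple-choice items, free text for open-ended ones).
    """
    correct = sum(
        predict(it).strip().lower() == it.answer.strip().lower()
        for it in items
    )
    return correct / len(items)
```

Per-category scores like those reported in the paper could then be obtained by filtering `items` on `category` before calling `accuracy`.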