🤖 AI Summary
Current vision-language models (VLMs) exhibit significant limitations in higher-order spatial reasoning; existing benchmarks assess only basic spatial relations (e.g., left/right, near/far) and lack cognitive depth and task complexity. Method: We introduce OmniSpatial, the first comprehensive spatial reasoning benchmark grounded in cognitive psychology, spanning four dimensions: dynamic reasoning, complex spatial logic, spatial interaction, and perspective-taking, with 50 fine-grained subcategories and over 1.5K high-quality question-answer pairs. Built through Internet data crawling and careful manual annotation, the benchmark provides a multi-dimensional, fine-grained, and cognitively aligned evaluation protocol that supports unified assessment of open- and closed-source VLMs as well as specialized reasoning and spatial-understanding models. Contribution/Results: Experiments reveal that state-of-the-art VLMs achieve below 42% average accuracy, exposing critical weaknesses in embodied reasoning and interpretability, thereby establishing a foundational benchmark and clarifying key research directions for advancing spatial intelligence.
📝 Abstract
Spatial reasoning is a key aspect of cognitive psychology and remains a major bottleneck for current vision-language models (VLMs). While extensive research has aimed to evaluate or improve VLMs' understanding of basic spatial relations, such as distinguishing left from right or near from far and counting objects, these tasks represent only the most fundamental level of spatial reasoning. In this work, we introduce OmniSpatial, a comprehensive and challenging benchmark for spatial reasoning, grounded in cognitive psychology. OmniSpatial covers four major categories: dynamic reasoning, complex spatial logic, spatial interaction, and perspective-taking, with 50 fine-grained subcategories. Through Internet data crawling and careful manual annotation, we construct over 1.5K question-answer pairs. Extensive experiments show that both open- and closed-source VLMs, as well as existing reasoning and spatial understanding models, exhibit significant limitations in comprehensive spatial understanding. We further analyze failure cases and propose potential directions for future research.
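The evaluation described above, reporting per-category and overall accuracy over the four major categories, can be sketched as follows. This is a minimal illustration, not OmniSpatial's actual harness: the record schema (`category`, `gold`, `pred`) and the sample data are assumptions for demonstration.

```python
from collections import defaultdict

# Hypothetical multiple-choice QA records in the spirit of OmniSpatial's
# four categories; field names and values are illustrative, not the
# benchmark's actual schema.
records = [
    {"category": "dynamic reasoning",     "gold": "B", "pred": "B"},
    {"category": "dynamic reasoning",     "gold": "A", "pred": "C"},
    {"category": "complex spatial logic", "gold": "D", "pred": "D"},
    {"category": "spatial interaction",   "gold": "A", "pred": "A"},
    {"category": "perspective-taking",    "gold": "C", "pred": "B"},
]

def score(records):
    """Return (overall accuracy, per-category accuracy) for multiple-choice QA."""
    correct = 0
    per_cat = defaultdict(lambda: [0, 0])  # category -> [correct, total]
    for r in records:
        hit = r["pred"] == r["gold"]
        correct += hit
        per_cat[r["category"]][0] += hit
        per_cat[r["category"]][1] += 1
    overall = correct / len(records)
    return overall, {c: k / n for c, (k, n) in per_cat.items()}

overall, by_cat = score(records)
print(f"overall accuracy: {overall:.2%}")
for cat, acc in sorted(by_cat.items()):
    print(f"  {cat}: {acc:.0%}")
```

Averaging per-category accuracies (rather than pooling all questions) is one common way to keep a category with many questions from dominating the headline number; which convention the benchmark uses matters when comparing models.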