🤖 AI Summary
Vision-language models (VLMs) exhibit significant weaknesses in spatial reasoning, particularly in perspective taking, a critical cognitive capacity for embodied AI. Method: We introduce SpinBench, a cognitive-science-inspired diagnostic benchmark that establishes the first fine-grained taxonomy of spatial transformations (including translation, rotation, object-relative pose, and viewpoint change) and designs a progressive task hierarchy, from single- to multi-object scenarios, grounded in mental simulation paradigms. Contribution/Results: Through human reaction-time analysis and large-scale evaluation of 37 state-of-the-art VLMs, we identify pervasive egocentric biases and systematic failures in rotational reasoning. Humans reach 91.2% accuracy, and their reaction times correlate significantly with model performance, supporting SpinBench's diagnostic validity. SpinBench provides an interpretable, decomposable evaluation framework for spatial reasoning in VLMs, enabling fine-grained capability analysis beyond holistic accuracy.
📝 Abstract
We present SpinBench, a cognitively grounded diagnostic benchmark for evaluating spatial reasoning in vision-language models (VLMs). SpinBench is designed around the core challenge of spatial reasoning: perspective taking, the ability to reason about how scenes and object relations change under viewpoint transformation. Because perspective taking requires multiple cognitive capabilities, such as recognizing objects across views, grounding relative positions, and mentally simulating transformations, SpinBench introduces a set of fine-grained diagnostic categories. Our categories target translation, rotation, object-relative pose, and viewpoint change, and are progressively structured so that simpler single-object tasks scaffold toward the most demanding multi-object perspective-taking setting. We evaluate 37 state-of-the-art VLMs, both proprietary and open source. Results reveal systematic weaknesses: strong egocentric bias, poor rotational understanding, and inconsistencies under symmetric and syntactic reformulations. Scaling analysis shows both smooth improvements and emergent capabilities. Although human subjects achieve high accuracy (91.2%), task difficulty as measured by human response time correlates strongly with VLM accuracy, indicating that SpinBench captures spatial reasoning challenges shared by humans and VLMs. We believe SpinBench provides critical insights into spatial reasoning in VLMs and highlights key gaps in their ability to reason about physical space. Our website can be found at https://spinbench25.github.io/.