🤖 AI Summary
Current vision-language models (VLMs) exhibit significant deficiencies in spatial relation understanding, e.g., relative object positioning and ordering. To address this, we introduce RocketScience, the first open-source contrastive vision-language benchmark for spatial reasoning, constructed from real-world scenes that are intuitive to humans but challenging for models. Methodologically, it employs image–text contrastive pairs and a chain-of-thought-guided decomposition to disentangle spatial reasoning capability from object localization ability and evaluate each independently, which is a novel contribution. Experiments reveal that state-of-the-art open-source and commercial VLMs perform poorly; in contrast, strong reasoning models, particularly those supporting chain-of-thought, achieve markedly better performance, although even the best model still trails human accuracy by over 20 percentage points. This work identifies spatial reasoning as a fundamental bottleneck in contemporary VLMs and provides a reproducible evaluation framework alongside insights to guide future research.
📝 Abstract
We propose RocketScience, an open-source contrastive VLM benchmark that tests for spatial relation understanding. It consists of entirely new real-world image–text pairs, covering mostly relative spatial understanding and the order of objects. The benchmark is designed to be very easy for humans yet hard for the current generation of VLMs, and we verify this empirically. Our results show a striking lack of spatial relation understanding in open-source and frontier commercial VLMs, and a surprisingly high performance of reasoning models. Additionally, we perform a disentanglement analysis to separate the contributions of object localization and spatial reasoning in chain-of-thought-based models, and find that performance on the benchmark is bottlenecked by spatial reasoning rather than object localization capabilities.
We release the dataset under a CC-BY-4.0 license and make the evaluation code available at: https://github.com/nilshoehing/rocketscience
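As a minimal sketch of what "contrastive" evaluation means here: a benchmark item is typically a pair of images whose captions differ only in the spatial relation, and the item counts as correct only if the model prefers the matching caption for both images, so relation-blind heuristics score near zero. The function names, file names, and dummy scorer below are purely illustrative assumptions, not the actual RocketScience API.

```python
# Sketch of contrastive-pair scoring (assumed metric, not the official code).
# score(image, caption) -> float; higher means a better image-text match.

def pair_correct(score, img_a, img_b, cap_a, cap_b):
    # Both directions must be right: each image must prefer its own caption.
    return (score(img_a, cap_a) > score(img_a, cap_b)
            and score(img_b, cap_b) > score(img_b, cap_a))

def benchmark_accuracy(score, pairs):
    # pairs: iterable of (img_a, img_b, cap_a, cap_b) tuples.
    correct = sum(pair_correct(score, *p) for p in pairs)
    return correct / len(pairs)

# Toy example: captions differ only in the spatial relation.
pairs = [("photo_left.jpg", "photo_right.jpg",
          "the cup is left of the plate",
          "the cup is right of the plate")]

# Dummy scorer that happens to get the toy pair right (stand-in for a VLM).
def dummy_score(img, cap):
    return 1.0 if ("left" in img) == ("left of" in cap) else 0.0

print(benchmark_accuracy(dummy_score, pairs))  # -> 1.0
```

Under this pairwise criterion, a model that ignores the relation word entirely cannot exceed chance, which is what makes the benchmark diagnostic for spatial understanding rather than object recognition.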