🤖 AI Summary
Existing spatial reasoning research is largely confined to indoor environments and multi-view inputs, limiting generalization to real-world monocular outdoor scenes. To address this, we introduce MonoSR—the first large-scale, open-vocabulary spatial reasoning dataset for monocular images—encompassing diverse settings including indoor, outdoor, and object-centric scenes, and supporting multi-type spatial relation question answering. We construct 3D spatial relation annotations directly from monocular images via multi-view alignment to ensure annotation fidelity, and systematically benchmark state-of-the-art vision-language models on monocular 3D spatial understanding. Experiments expose critical deficiencies in current models’ open-vocabulary and cross-scene spatial reasoning capabilities, while demonstrating the pivotal role of auxiliary cues (e.g., depth, layout). MonoSR establishes a new benchmark and provides principled design guidelines for monocular spatial reasoning.
📝 Abstract
Spatial reasoning (SR), the ability to infer 3D spatial information from 2D inputs, is essential for real-world applications such as embodied AI and autonomous driving. However, existing research focuses primarily on indoor environments and typically relies on multi-view observations, which limits its generalizability to outdoor scenarios and constrains its applicability to monocular images, the most common real-world setting. In this work, we propose MonoSR, a large-scale monocular spatial reasoning dataset that spans diverse scenarios, including indoor, outdoor, and object-centric settings, and supports multiple question types. MonoSR provides a path toward open-world monocular spatial reasoning. Beyond introducing the dataset, we evaluate advanced vision-language models to reveal their limitations on this challenging task. We further analyze whether auxiliary information is crucial for monocular spatial reasoning and offer practical guidance for designing future models. These contributions collectively establish a foundation for advancing monocular spatial reasoning in real-world, open-world environments.