MonoSR: Open-Vocabulary Spatial Reasoning from Monocular Images

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing spatial reasoning research is largely confined to indoor environments and multi-view inputs, limiting generalization to real-world monocular outdoor scenes. To address this, we introduce MonoSR—the first large-scale, open-vocabulary spatial reasoning dataset for monocular images—encompassing diverse settings including indoor, outdoor, and object-centric scenes, and supporting multi-type spatial relation question answering. We construct 3D spatial relation annotations directly from monocular images via multi-view alignment to ensure annotation fidelity, and systematically benchmark state-of-the-art vision-language models on monocular 3D spatial understanding. Experiments expose critical deficiencies in current models’ open-vocabulary and cross-scene spatial reasoning capabilities, while demonstrating the pivotal role of auxiliary cues (e.g., depth, layout). MonoSR establishes a new benchmark and provides principled design guidelines for monocular spatial reasoning.
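For concreteness, a single MonoSR-style question-answer record might look roughly like the sketch below. The field names, relation vocabulary, and file paths are illustrative assumptions for this page, not the dataset's actual schema.

```python
# Hypothetical MonoSR-style annotation record (all field names are illustrative only).
qa_record = {
    "image": "outdoor/000123.jpg",          # single monocular RGB image
    "scene_type": "outdoor",                 # indoor | outdoor | object-centric
    "question_type": "relative_position",    # one of several spatial-relation question types
    "question": "Is the cyclist to the left of the parked car?",
    "answer": "yes",
    "auxiliary": {                           # optional cues whose usefulness the paper studies
        "depth_map": "outdoor/000123_depth.png",
    },
}
```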

📝 Abstract
Spatial reasoning (SR), the ability to infer 3D spatial information from 2D inputs, is essential for real-world applications such as embodied AI and autonomous driving. However, existing research primarily focuses on indoor environments and typically relies on multi-view observations, which limits its generalizability to outdoor scenarios and constrains its applicability to monocular images, the most common real-world setting. In this work, we propose MonoSR, a large-scale monocular spatial reasoning dataset that spans diverse scenarios including indoor, outdoor, and object-centric settings, and supports multiple question types. MonoSR provides a path toward open-world monocular spatial reasoning. Beyond introducing the dataset, we evaluate advanced vision-language models to reveal their limitations on this challenging task. We further analyze whether auxiliary information is crucial for monocular spatial reasoning and offer practical guidance for designing future models. These contributions collectively establish a foundation for advancing monocular spatial reasoning in real-world, open-world environments.
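To make the evaluation setting concrete, the following is a minimal sketch of how one might benchmark an off-the-shelf vision-language model on records like the one shown above, optionally injecting a depth cue into the prompt. The `query_vlm` callable, the record schema, and the cue-injection format are placeholders, not part of the paper's released code or protocol.

```python
from typing import Callable

def evaluate_spatial_qa(records: list[dict],
                        query_vlm: Callable[[str, str], str],
                        use_depth_cue: bool = False) -> float:
    """Exact-match accuracy of a vision-language model on monocular spatial-relation QA.

    `query_vlm(image_path, prompt)` stands in for any VLM inference call;
    records follow the hypothetical schema sketched earlier.
    """
    correct = 0
    for rec in records:
        prompt = rec["question"]
        aux = rec.get("auxiliary", {})
        if use_depth_cue and "depth_map" in aux:
            # Auxiliary cue: point the model at the depth map. The paper studies
            # whether such cues help; this particular injection format is assumed.
            prompt += f" (A depth map is available at {aux['depth_map']}.)"
        prediction = query_vlm(rec["image"], prompt)
        correct += prediction.strip().lower() == rec["answer"].strip().lower()
    return correct / max(len(records), 1)
```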
Problem

Research questions and friction points this paper is trying to address.

Advancing monocular spatial reasoning for real-world open-world environments
Overcoming limitations of indoor-focused multi-view spatial reasoning approaches
Enabling 3D spatial inference from single 2D images across diverse scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale monocular spatial reasoning dataset (MonoSR)
Diverse indoor, outdoor, and object-centric scenarios
Vision-language model evaluation and design guidance
Qirui Wang
Technical University of Munich
Jingyi He
Technical University of Munich
Yining Pan
Institute for Infocomm Research (I2R), A*STAR, Singapore
Si Yong Yeo
Asst. Professor, Nanyang Technological University
Computer Vision, Medical Informatics, Artificial Intelligence, Medical Imaging, Medical Devices
Xulei Yang
Principal Scientist & Group Leader, A*STAR, Singapore
3D Vision, Artificial Intelligence, Medical Imaging
Shijie Li
Institute for Infocomm Research (I2R), A*STAR, Singapore