🤖 AI Summary
Vision-language models (VLMs) exhibit limited robustness in spatial understanding and reasoning tasks, primarily due to their inability to implicitly recover 3D geometric structure from 2D images.
Method: We propose the first unified framework that natively embeds 3D visual priors into VLMs, establishing a geometry-grounded architecture that jointly optimizes 3D reconstruction from multi-view images and video alongside contextual spatial reasoning. Our approach integrates explicit 3D geometric constraints, interleaved reasoning, and in-context learning to unify 3D attribute prediction and spatial relation modeling.
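To make the joint optimization concrete, here is a minimal sketch of what a combined objective could look like: a language-modeling loss over the spatial-reasoning text stream plus a 3D reconstruction loss over predicted geometry. This is an illustrative assumption, not the paper's actual implementation; the class name `JointSpatialLoss`, the weight `lambda_recon`, and the use of an L1 point-map loss are all hypothetical.

```python
# Hypothetical sketch of a joint reconstruction + reasoning objective.
# All names (JointSpatialLoss, lambda_recon) are illustrative assumptions,
# not the paper's actual API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointSpatialLoss(nn.Module):
    """Combines a language-modeling loss with a 3D reconstruction loss."""

    def __init__(self, lambda_recon: float = 1.0):
        super().__init__()
        self.lambda_recon = lambda_recon
        # ignore_index masks out padding / non-answer tokens.
        self.lm_loss = nn.CrossEntropyLoss(ignore_index=-100)

    def forward(
        self,
        lm_logits: torch.Tensor,    # (B, T, vocab) next-token logits
        lm_targets: torch.Tensor,   # (B, T) target token ids
        pred_points: torch.Tensor,  # (B, N, 3) predicted 3D points
        gt_points: torch.Tensor,    # (B, N, 3) ground-truth 3D points
    ) -> torch.Tensor:
        # Next-token prediction over the spatial-reasoning text stream.
        l_lm = self.lm_loss(lm_logits.flatten(0, 1), lm_targets.flatten())
        # Per-point regression against 3D geometry (e.g., a point map
        # recovered from multi-view images or video frames).
        l_recon = F.l1_loss(pred_points, gt_points)
        return l_lm + self.lambda_recon * l_recon
```

Under this sketch, both terms are backpropagated through shared features, which is one way a single backbone could learn 3D geometry and spatial language jointly.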
Contribution/Results: Experiments demonstrate that our model matches state-of-the-art feed-forward methods on standard 3D reconstruction benchmarks while achieving leading or competitive performance on multiple spatial reasoning benchmarks, including SQA and SpatialBench, significantly enhancing the generalization of VLMs for spatial intelligence.
📝 Abstract
Vision-Language Models (VLMs) still lack robustness in spatial intelligence, performing poorly on spatial understanding and reasoning tasks. We attribute this gap to the absence of a visual geometry learning process capable of reconstructing 3D space from 2D images. We present G$^2$VLM, a geometry-grounded vision-language model that bridges two fundamental aspects of spatial intelligence: spatial 3D reconstruction and spatial understanding. G$^2$VLM natively leverages learned 3D visual geometry features to directly predict 3D attributes and to enhance spatial reasoning via in-context learning and interleaved reasoning. Our unified design is highly scalable for spatial understanding: it trains on abundant multi-view image and video data while still gaining the benefits of 3D visual priors, which are typically derived only from hard-to-collect annotations. Experimental results demonstrate that G$^2$VLM is proficient in both tasks, achieving results comparable to state-of-the-art feed-forward 3D reconstruction models and better or competitive results across spatial understanding and reasoning tasks. By unifying a semantically strong VLM with low-level 3D vision tasks, we hope G$^2$VLM can serve as a strong baseline for the community and unlock future applications such as 3D scene editing.