🤖 AI Summary
This work addresses the limitations of current vision-language models (VLMs) in precise metric and spatial reasoning for indoor scene understanding, where perception and reasoning are typically tightly coupled. To overcome this, the authors propose an agent-based decoupled framework that grounds a large language model in an explicit 3D scene graph (3DSG), exposing structured geometric primitives—such as distance, pose, and size—to enable high-fidelity spatial reasoning. Notably, the approach separates perception from reasoning without task-specific fine-tuning; to isolate reasoning performance from perception errors, the 3DSG is instantiated from ground-truth annotations. Evaluated on the static split of VSI-Bench, the method substantially outperforms existing approaches, yielding absolute improvements of 33%–50% over baseline VLMs and exceeding prior work by up to 16 percentage points.
📝 Abstract
Vision-Language Models (VLMs) have increasingly become the main paradigm for understanding indoor scenes, but they still struggle with metric and spatial reasoning. Current approaches rely on end-to-end video understanding or large-scale fine-tuning on spatial question answering, inherently coupling perception and reasoning. In this paper, we investigate whether decoupling perception and reasoning leads to improved spatial reasoning. We propose an agentic framework for static 3D indoor scene reasoning that grounds an LLM in an explicit 3D scene graph (3DSG). Rather than ingesting videos directly, each scene is represented as a persistent 3DSG constructed by a dedicated perception module. To isolate reasoning performance, we instantiate the 3DSG from ground-truth annotations. The agent interacts with the scene exclusively through structured geometric tools that expose fundamental properties such as object dimensions, distances, poses, and spatial relationships. Our results on the static split of VSI-Bench provide an upper bound on spatial reasoning performance under ideal perceptual conditions; this bound exceeds previous work by up to 16\%, without task-specific fine-tuning. Compared to base VLMs, our agentic variant achieves significantly better performance, with average improvements between 33\% and 50\%. These findings indicate that explicit geometric grounding substantially improves spatial reasoning performance, and suggest that structured representations offer a compelling alternative to purely end-to-end visual reasoning.
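To make the abstract's architecture concrete, here is a minimal sketch of the kind of structured geometric tools an LLM agent could call against an explicit 3D scene graph. All names and the API shape (`SceneGraph3D`, `distance`, `dimensions`, `nearest`) are illustrative assumptions, not the paper's actual interface; object coordinates are made-up examples.

```python
import math
from dataclasses import dataclass

@dataclass
class SceneObject:
    """One node of a (hypothetical) 3D scene graph: label, pose, and size."""
    label: str
    center: tuple  # (x, y, z) position in metres
    size: tuple    # (width, depth, height) in metres

class SceneGraph3D:
    """Toy 3DSG exposing geometric tools instead of raw pixels/video."""

    def __init__(self, objects):
        self.objects = {o.label: o for o in objects}

    def distance(self, a, b):
        """Euclidean centre-to-centre distance between two objects."""
        return math.dist(self.objects[a].center, self.objects[b].center)

    def dimensions(self, label):
        """Bounding-box size (width, depth, height) of an object."""
        return self.objects[label].size

    def nearest(self, label):
        """Label of the object whose centre is closest to the given one."""
        others = [o for o in self.objects if o != label]
        return min(others, key=lambda o: self.distance(label, o))

# Example scene with made-up annotations
sg = SceneGraph3D([
    SceneObject("sofa",  (0.0, 0.0, 0.4), (2.0, 0.9, 0.8)),
    SceneObject("table", (1.5, 0.0, 0.3), (1.2, 0.6, 0.6)),
    SceneObject("lamp",  (3.0, 1.0, 1.2), (0.3, 0.3, 1.5)),
])
print(round(sg.distance("sofa", "table"), 2))  # 1.5
print(sg.nearest("lamp"))                      # table
```

In the paper's setting, such tool calls would be issued by the LLM agent to answer metric questions (e.g. "how far is the sofa from the table?") without ever seeing the video.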