RieMind: Geometry-Grounded Spatial Agent for Scene Understanding

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of current vision-language models (VLMs) in precise metric and spatial reasoning for indoor scene understanding, where perception and reasoning are typically tightly coupled. To overcome this, the authors propose an agentic framework that decouples the two: a large language model is grounded in an explicit 3D scene graph (3DSG) and interacts with the scene only through structured geometric tools exposing properties such as distances, poses, and sizes. To isolate reasoning performance, the 3DSG is instantiated from ground-truth annotations, and no task-specific fine-tuning is required. On the static split of VSI-Bench, the method improves on base VLMs by 33%–50% on average and exceeds prior approaches by up to 16 percentage points.

📝 Abstract
Visual Language Models (VLMs) have increasingly become the main paradigm for understanding indoor scenes, but they still struggle with metric and spatial reasoning. Current approaches rely on end-to-end video understanding or large-scale spatial question-answering fine-tuning, inherently coupling perception and reasoning. In this paper, we investigate whether decoupling perception and reasoning leads to improved spatial reasoning. We propose an agentic framework for static 3D indoor scene reasoning that grounds an LLM in an explicit 3D scene graph (3DSG). Rather than ingesting videos directly, each scene is represented as a persistent 3DSG constructed by a dedicated perception module. To isolate reasoning performance, we instantiate the 3DSG from ground-truth annotations. The agent interacts with the scene exclusively through structured geometric tools that expose fundamental properties such as object dimensions, distances, poses, and spatial relationships. The results we obtain on the static split of VSI-Bench provide an upper bound on spatial reasoning performance under ideal perceptual conditions, and this bound is significantly higher than previous works, by up to 16%, without task-specific fine-tuning. Compared to base VLMs, our agentic variant achieves significantly better performance, with average improvements between 33% and 50%. These findings indicate that explicit geometric grounding substantially improves spatial reasoning performance, and suggest that structured representations offer a compelling alternative to purely end-to-end visual reasoning.
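To make the abstract's idea of "structured geometric tools" over a 3D scene graph concrete, here is a minimal sketch of what such a tool interface might look like. All class and method names (`SceneObject`, `SceneGraph3D`, `distance`, `dimensions`, `nearest`) are illustrative assumptions, not the paper's actual API; the point is only that the agent queries explicit geometry instead of pixels.

```python
import math
from dataclasses import dataclass

@dataclass
class SceneObject:
    """One node of a hypothetical 3D scene graph, in metric world coordinates."""
    name: str
    center: tuple[float, float, float]  # (x, y, z) position in meters
    size: tuple[float, float, float]    # (width, depth, height) in meters

class SceneGraph3D:
    def __init__(self, objects: list[SceneObject]):
        self.objects = {o.name: o for o in objects}

    # --- structured geometric tools an LLM agent could call ---

    def distance(self, a: str, b: str) -> float:
        """Euclidean center-to-center distance between two objects."""
        return math.dist(self.objects[a].center, self.objects[b].center)

    def dimensions(self, name: str) -> tuple[float, float, float]:
        """Metric size (width, depth, height) of an object."""
        return self.objects[name].size

    def nearest(self, name: str) -> str:
        """Name of the closest other object in the scene."""
        return min(
            (other for other in self.objects if other != name),
            key=lambda other: self.distance(name, other),
        )

# Toy scene with hand-picked coordinates (not from the paper's benchmark).
scene = SceneGraph3D([
    SceneObject("sofa",  (0.0, 0.0, 0.4), (2.0, 0.9, 0.8)),
    SceneObject("table", (1.5, 0.0, 0.4), (1.2, 0.6, 0.6)),
    SceneObject("lamp",  (3.0, 1.0, 1.2), (0.3, 0.3, 1.5)),
])

print(scene.distance("sofa", "table"))  # → 1.5
print(scene.nearest("lamp"))            # → table
```

In this setup the agent never sees the video: a question like "which object is closest to the lamp?" is answered by calling `nearest("lamp")`, so metric precision comes from the graph rather than from the model's visual estimates.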
Problem

Research questions and friction points this paper is trying to address.

spatial reasoning
visual language models
metric reasoning
scene understanding
3D scene graph
Innovation

Methods, ideas, or system contributions that make the work stand out.

geometry-grounded reasoning
3D scene graph
spatial reasoning
perception-reasoning decoupling
structured representation
👥 Authors

Fernando Ropero (Riemann Lab, Huawei Technologies)
Erkin Turkoz (Riemann Lab, Huawei Technologies)
Daniel Matos (Riemann Lab, Huawei Technologies)
Junqing Du (Riemann Lab, Huawei Technologies)
Antonio Ruiz (Riemann Lab, Huawei Technologies)
Yanfeng Zhang (Northeastern University, China)
Lu Liu (Riemann Lab, Huawei Technologies)
Mingwei Sun (Riemann Lab, Huawei Technologies)
Yongliang Wang (Riemann Lab, Huawei Technologies)