Boosting MLLM Spatial Reasoning with Geometrically Referenced 3D Scene Representations

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited 3D spatial reasoning ability of multimodal large language models (MLLMs), which struggle to comprehend three-dimensional scene structure. The authors propose Geometry-Referenced 3D Scene Representation (GR3D), which annotates objects in input images with unique identifiers and encodes their 3D geometric properties as language-readable text indexed by those identifiers. This is the first method to embed explicit 3D geometric information directly into MLLM inputs without additional training, enabling zero-shot 3D spatial reasoning. By coupling 2D visual features with language-based reasoning, GR3D substantially improves performance: it raises GPT-5's overall accuracy on VSI-Bench by 8% and yields gains of more than 11% on tasks that depend heavily on spatial layout, while also supporting complex spatial reasoning from sparse viewpoints.

📝 Abstract
While Multimodal Large Language Models (MLLMs) have achieved remarkable success in 2D visual understanding, their ability to reason about 3D space remains limited. To address this gap, we introduce geometrically referenced 3D scene representations (GR3D). Given a set of input images, GR3D annotates objects in the images with unique IDs and encodes their 3D geometric attributes as textual references indexed by these IDs. This representation enables MLLMs to interpret 3D cues using their advanced language-based skills in mathematical reasoning, while concurrently analyzing 2D visual features in a tightly coupled way. We present a simple yet effective approach based on GR3D, which requires no additional training and is readily applicable to different MLLMs. Implemented in a zero-shot setting, our approach boosts GPT-5's performance on VSI-Bench by 8% overall and more than 11% on tasks that rely heavily on spatial layout understanding. Qualitative studies further demonstrate that GR3D empowers MLLMs to perform complex spatial reasoning with highly sparse input views.
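The abstract describes the core mechanism concretely: objects are tagged with unique IDs, and their 3D geometric attributes are serialized as text indexed by those IDs so the MLLM can reason over them linguistically. As a rough illustration only (the paper's actual attribute set and prompt format are not specified here), the serialization step might look like the following minimal sketch, where the object list, key names, and metric units are all assumptions:

```python
def gr3d_prompt(objects):
    """Serialize hypothetical object detections into a GR3D-style text block.

    objects: list of dicts with assumed keys:
      'label'  - object class name
      'center' - (x, y, z) position in meters
      'size'   - (width, height, depth) in meters
    The IDs emitted here would match the ID annotations drawn on the images.
    """
    lines = ["3D scene reference (IDs match annotations in the images):"]
    for obj_id, obj in enumerate(objects):
        cx, cy, cz = obj["center"]
        w, h, d = obj["size"]
        lines.append(
            f"[{obj_id}] {obj['label']}: "
            f"center=({cx:.2f}, {cy:.2f}, {cz:.2f}) m, "
            f"size=({w:.2f} x {h:.2f} x {d:.2f}) m"
        )
    return "\n".join(lines)


# Hypothetical scene with two detected objects.
scene = [
    {"label": "sofa", "center": (1.0, 0.4, 2.5), "size": (2.0, 0.9, 0.9)},
    {"label": "table", "center": (2.2, 0.35, 2.6), "size": (1.2, 0.7, 0.7)},
]
print(gr3d_prompt(scene))
```

The resulting text block would be appended to the image prompt, letting the model answer layout questions (distances, relative positions) by arithmetic over the listed coordinates rather than by visual estimation alone.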
Problem

Research questions and friction points this paper is trying to address.

Multimodal Large Language Models
3D spatial reasoning
geometric representation
visual understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

geometrically referenced 3D representation
multimodal large language models
spatial reasoning
zero-shot 3D understanding
3D scene representation