Spatial 3D-LLM: Exploring Spatial Awareness in 3D Vision-Language Models

📅 2025-07-22
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing 3D multimodal large language models (MLLMs) heavily rely on scene compression or object segmentation, limiting their capacity to faithfully model the geometric and topological structure of 3D space, thereby impairing spatial perception. To address this, we propose a progressive spatial awareness mechanism that constructs position-sensitive and distance-aware 3D scene embeddings via hierarchical encoding, explicitly modeling inter-object depth, metric distance, and spatial layout relationships. We introduce two novel tasks, 3D object distance measurement and layout editing, to foster fine-grained understanding of spatial semantics. Furthermore, we integrate a large language model with a dedicated 3D spatial encoder within a unified architecture, leveraging vision-guided prompting and a self-constructed 3D instruction-tuning dataset. Experiments demonstrate state-of-the-art performance across multiple 3D vision-language benchmarks, with substantial gains in reasoning about complex spatial relations.

๐Ÿ“ Abstract
A new era has unlocked exciting possibilities for extending Large Language Models (LLMs) to tackle 3D vision-language tasks. However, most existing 3D multimodal LLMs (MLLMs) rely on compressing holistic 3D scene information or segmenting independent objects to perform these tasks, which limits their spatial awareness due to insufficient representation of the richness inherent in 3D scenes. To overcome these limitations, we propose Spatial 3D-LLM, a 3D MLLM specifically designed to enhance spatial awareness for 3D vision-language tasks by enriching the spatial embeddings of 3D scenes. Spatial 3D-LLM integrates an LLM backbone with a progressive spatial awareness scheme that incrementally captures spatial information as the perception field expands, generating location-enriched 3D scene embeddings to serve as visual prompts. Furthermore, we introduce two novel tasks, 3D object distance measurement and 3D layout editing, and construct a 3D instruction dataset, MODEL, to evaluate the model's spatial awareness capabilities. Experimental results demonstrate that Spatial 3D-LLM achieves state-of-the-art performance across a wide range of 3D vision-language tasks, showing that the improvements stem from our progressive spatial awareness scheme, which mines more profound spatial information. Our code is available at https://github.com/bjshuyuan/Spatial-3D-LLM.
Problem

Research questions and friction points this paper is trying to address.

Enhancing spatial awareness in 3D vision-language models
Overcoming limitations of holistic 3D scene compression
Introducing tasks for 3D object distance and layout
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhances spatial awareness with enriched embeddings
Uses progressive spatial awareness scheme
Introduces 3D distance and layout tasks
Xiaoyan Wang
Beijing Digital Native Digital City Research Center
Zeju Li
Beijing Digital Native Digital City Research Center, The Chinese University of Hong Kong
Yifan Xu
Beijing Digital Native Digital City Research Center
Jiaxing Qi
BUAA
AIOps · Software Engineering · Data Mining · AI4Science
Zhifei Yang
Peking University
3D Generation · Generative Models
Ruifei Ma
Beijing Digital Native Digital City Research Center
Xiangde Liu
Beijing Digital Native Digital City Research Center
Chao Zhang
Beijing Digital Native Digital City Research Center