🤖 AI Summary
Current large 3D foundation models face two key bottlenecks: (1) dominant 3D datasets are constructed from global, disembodied viewpoints and lack situated spatial context; (2) existing architectures fail to establish explicit, fine-grained semantic alignment between 3D spatial representations and natural language. To address these limitations, we introduce Spartun3D, the first scalable situated 3D spatial understanding dataset, synthesized and annotated from ScanNet and Objaverse. We further propose Spartun3D-LLM, a multimodal large language model featuring a novel situated spatial alignment module. This module combines spatial-coordinate-aware feature mapping with a cross-modal alignment loss to achieve precise visual-language alignment over relative positioning, occlusion, and viewpoint-dependent spatial relations. Evaluated on multiple situated spatial reasoning benchmarks, Spartun3D-LLM improves average accuracy by 19.7% over prior methods.
📝 Abstract
Integrating the 3D world into large language models (3D-based LLMs) has become a promising research direction for 3D scene understanding. However, current 3D-based LLMs fall short in situated understanding due to two key limitations: (1) existing 3D datasets are constructed from a global perspective of the 3D scene and lack situated context; (2) the architectures of existing 3D-based LLMs lack explicit alignment between the spatial representations of 3D scenes and natural language, limiting their performance on tasks that require precise spatial reasoning. We address these issues by introducing Spartun3D, a scalable situated 3D dataset that incorporates various situated spatial reasoning tasks. Furthermore, we propose Spartun3D-LLM, which builds on an existing 3D-based LLM and integrates a novel situated spatial alignment module to enhance the alignment between 3D visual representations and their corresponding textual descriptions. Experimental results demonstrate that both the proposed dataset and the alignment module significantly improve the situated spatial understanding of 3D-based LLMs.
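The abstract does not spell out the form of the alignment objective. A common choice for this kind of cross-modal alignment is an InfoNCE-style contrastive loss that pulls each 3D object feature toward its paired text embedding and pushes it away from the other captions in the batch. The sketch below is our own minimal NumPy illustration of that idea, not the paper's implementation; the function name, temperature value, and feature shapes are all assumptions.

```python
import numpy as np

def situated_alignment_loss(obj_feats, txt_feats, temperature=0.07):
    """Hypothetical InfoNCE-style loss aligning paired 3D object
    features (N, D) with text features (N, D); row i of each array
    is assumed to describe the same object."""
    # L2-normalize each modality so similarities are cosine similarities
    o = obj_feats / np.linalg.norm(obj_feats, axis=1, keepdims=True)
    t = txt_feats / np.linalg.norm(txt_feats, axis=1, keepdims=True)
    logits = o @ t.T / temperature      # (N, N) similarity matrix
    labels = np.arange(len(o))          # i-th object pairs with i-th text

    def xent(l):
        # numerically stable softmax cross-entropy on the diagonal pairs
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # average the object-to-text and text-to-object directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Under this formulation, perfectly aligned pairs drive the loss toward zero, while mismatched pairs are penalized, which is the behavior the alignment module needs to tie spatial representations to their textual descriptions.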