3DCity-LLM: Empowering Multi-modality Large Language Models for 3D City-scale Perception and Understanding

📅 2026-03-24
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing large vision-language models struggle to comprehend 3D urban-scale scenes. To address this limitation, this work proposes 3DCity-LLM, a unified framework built around a coarse-to-fine feature encoding mechanism with three parallel branches covering target objects, inter-object relationships, and global scene context. The authors also construct the 3DCity-LLM-1.2M dataset of 1.2 million high-quality samples, which integrates explicit 3D geometric information with user-guided simulations to improve the diversity and realism of its question-answering pairs. Evaluated under a multi-dimensional protocol combining text-similarity metrics and LLM-based semantic assessment, the proposed method significantly outperforms state-of-the-art approaches on two benchmarks, advancing spatial reasoning in 3D urban environments and fostering the development of urban intelligence.
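
The summary describes the encoder only at a high level; below is a minimal PyTorch sketch of what such a three-branch coarse-to-fine design could look like. Every class name, feature dimension, and the simple concatenation fusion are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of a three-branch coarse-to-fine scene encoder.
# All module names, feature dimensions, and the concatenation-based
# fusion are illustrative assumptions, not the released implementation.
import torch
import torch.nn as nn


class CoarseToFineEncoder(nn.Module):
    """Encodes a 3D urban scene at three granularities and fuses the
    results into one token sequence for the language model."""

    def __init__(self, feat_dim: int = 256, llm_dim: int = 4096):
        super().__init__()
        # Fine branch: one feature vector per target object.
        self.object_branch = nn.Linear(feat_dim, llm_dim)
        # Middle branch: one feature vector per object pair
        # (e.g. two object features concatenated).
        self.relation_branch = nn.Linear(2 * feat_dim, llm_dim)
        # Coarse branch: a single global scene descriptor.
        self.scene_branch = nn.Linear(feat_dim, llm_dim)

    def forward(self, obj_feats, rel_feats, scene_feat):
        # obj_feats:  (N, feat_dim)      per-object features
        # rel_feats:  (M, 2 * feat_dim)  per-pair features
        # scene_feat: (1, feat_dim)      global context vector
        tokens = torch.cat(
            [
                self.object_branch(obj_feats),
                self.relation_branch(rel_feats),
                self.scene_branch(scene_feat),
            ],
            dim=0,
        )
        return tokens  # (N + M + 1, llm_dim), prepended to text tokens


# Example: 12 objects, 20 sampled object pairs, one scene vector.
enc = CoarseToFineEncoder()
out = enc(torch.randn(12, 256), torch.randn(20, 512), torch.randn(1, 256))
print(out.shape)  # torch.Size([33, 4096])
```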

๐Ÿ“ Abstract
While multi-modality large language models excel in object-centric or indoor scenarios, scaling them to 3D city-scale environments remains a formidable challenge. To bridge this gap, we propose 3DCity-LLM, a unified framework designed for 3D city-scale vision-language perception and understanding. 3DCity-LLM employs a coarse-to-fine feature encoding strategy comprising three parallel branches for the target object, inter-object relationships, and the global scene. To facilitate large-scale training, we introduce the 3DCity-LLM-1.2M dataset, which comprises approximately 1.2 million high-quality samples across seven representative task categories, ranging from fine-grained object analysis to multi-faceted scene planning. This strictly quality-controlled dataset integrates explicit 3D numerical information and diverse user-oriented simulations, enriching the diversity and realism of question answering in urban scenarios. Furthermore, we apply a multi-dimensional evaluation protocol based on text-similarity metrics and LLM-based semantic assessment to ensure faithful and comprehensive comparisons across all methods. Extensive experiments on two benchmarks demonstrate that 3DCity-LLM significantly outperforms existing state-of-the-art methods, offering a promising direction for advancing spatial reasoning and urban intelligence. The source code and dataset are available at https://github.com/SYSU-3DSTAILab/3D-City-LLM.
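
The abstract names the evaluation protocol but not its exact components. Below is a minimal sketch of the two-sided idea, pairing a standard text-similarity metric with an LLM judge; the specific metric (ROUGE-L), the prompt wording, and the 0-10 scoring scale are assumptions for illustration, not the paper's specification.

```python
# Hypothetical sketch of the two-part evaluation protocol: a classical
# text-similarity metric plus an LLM-as-judge semantic score. The chosen
# metric (ROUGE-L), judge prompt, and 0-10 scale are assumptions.
from rouge_score import rouge_scorer  # pip install rouge-score


def text_similarity(prediction: str, reference: str) -> float:
    """ROUGE-L F1, standing in for the paper's text-similarity metrics."""
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    return scorer.score(reference, prediction)["rougeL"].fmeasure


JUDGE_PROMPT = (
    "Rate from 0 to 10 how well the candidate answer matches the "
    "reference answer in meaning. Reply with the number only.\n"
    "Reference: {ref}\nCandidate: {pred}\nScore:"
)


def semantic_score(prediction: str, reference: str, ask_llm) -> float:
    """LLM-based semantic assessment. `ask_llm` is any callable that
    sends a prompt to a judge model and returns its text reply."""
    reply = ask_llm(JUDGE_PROMPT.format(ref=reference, pred=prediction))
    return float(reply.strip().split()[0]) / 10.0


print(text_similarity("a tall office tower by the river",
                      "a tall tower next to the river"))
# semantic_score(pred, ref, ask_llm=...) plugs in any judge model client.
```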
Problem

Research questions and friction points this paper is trying to address.

3D city-scale perception
multi-modality large language models
urban understanding
spatial reasoning
vision-language modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D city-scale understanding
multi-modality LLM
coarse-to-fine feature encoding
urban spatial reasoning
vision-language dataset
Yiping Chen
Sun Yat-sen University
Point Clouds, Mobile Mapping, Geomatics, LiDAR, 3D Vision
Jinpeng Li
South China University of Technology
Machine Learning, Pattern Recognition, Medical Image Analysis, Brain-computer Interface
Wenyu Ke
School of Geospatial Engineering and Science, Sun Yat-sen University, Zhuhai, 519082, China
Yang Luo
School of Geospatial Engineering and Science, Sun Yat-sen University, Zhuhai, 519082, China
Jie Ouyang
School of Geospatial Engineering and Science, Sun Yat-sen University, Zhuhai, 519082, China
Zhongjie He
School of Geospatial Engineering and Science, Sun Yat-sen University, Zhuhai, 519082, China
Li Liu
College of Electronic Science, National University of Defense Technology, Changsha, 410000, China
Hongchao Fan
Department of Civil and Environmental Engineering, Norwegian University of Science and Technology, Trondheim, Norway
Hao Wu
National Geomatics Center of China, Beijing, 100830, China