AI Summary
Existing large vision-language models struggle to comprehend 3D urban-scale scenes effectively. To address this limitation, this work proposes 3DCity-LLM, a unified framework built around a coarse-to-fine feature encoding mechanism with three parallel branches: target objects, inter-object relationships, and global scene context. The authors also construct the 3DCity-LLM-1.2M dataset, comprising 1.2 million high-quality samples that integrate explicit 3D geometric information with user-guided simulations to enhance the diversity and realism of its question-answering pairs. Evaluated under a multidimensional protocol combining text-similarity metrics with semantic assessment by large language models, the proposed method significantly outperforms state-of-the-art approaches on two benchmarks, advancing spatial reasoning in 3D urban environments and fostering the development of urban intelligence.
Abstract
While multi-modality large language models excel in object-centric or indoor scenarios, scaling them to 3D city-scale environments remains a formidable challenge. To bridge this gap, we propose 3DCity-LLM, a unified framework designed for 3D city-scale vision-language perception and understanding. 3DCity-LLM employs a coarse-to-fine feature encoding strategy comprising three parallel branches for target objects, inter-object relationships, and the global scene. To facilitate large-scale training, we introduce the 3DCity-LLM-1.2M dataset, which comprises approximately 1.2 million high-quality samples across seven representative task categories, ranging from fine-grained object analysis to multi-faceted scene planning. This strictly quality-controlled dataset integrates explicit 3D numerical information and diverse user-oriented simulations, enriching the diversity and realism of question-answering pairs in urban scenarios. Furthermore, we apply a multi-dimensional evaluation protocol based on text-similarity metrics and LLM-based semantic assessment to ensure faithful and comprehensive comparison across all methods. Extensive experiments on two benchmarks demonstrate that 3DCity-LLM significantly outperforms existing state-of-the-art methods, offering a promising direction for advancing spatial reasoning and urban intelligence. The source code and dataset are available at https://github.com/SYSU-3DSTAILab/3D-City-LLM.
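The coarse-to-fine, three-branch encoding described above can be illustrated with a minimal sketch. All dimensions, the random linear projections, and the function names below are hypothetical placeholders, not the paper's actual implementation; the sketch only shows the general idea of encoding scene, relationship, and object features in parallel and ordering the resulting tokens coarse-to-fine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature and embedding dimensions (not from the paper).
D_OBJ, D_REL, D_SCENE, D_EMB = 32, 16, 64, 128

# Random projections stand in for the three learned branch encoders.
W_obj = rng.standard_normal((D_OBJ, D_EMB))
W_rel = rng.standard_normal((D_REL, D_EMB))
W_scene = rng.standard_normal((D_SCENE, D_EMB))

def encode_branch(features: np.ndarray, proj: np.ndarray) -> np.ndarray:
    """Project one branch's raw features into a shared embedding space."""
    return features @ proj

def coarse_to_fine_encode(obj_feats, rel_feats, scene_feats):
    """Fuse the three parallel branches into one token sequence.

    Tokens are ordered coarse-to-fine: global scene context first,
    then inter-object relationships, then per-object features.
    """
    scene_tok = encode_branch(scene_feats, W_scene)   # (1, D_EMB)
    rel_toks = encode_branch(rel_feats, W_rel)        # (R, D_EMB)
    obj_toks = encode_branch(obj_feats, W_obj)        # (N, D_EMB)
    return np.concatenate([scene_tok, rel_toks, obj_toks], axis=0)

# Example: 5 objects, 4 pairwise relations, 1 global scene vector
# yields a sequence of 1 + 4 + 5 = 10 embedding tokens.
tokens = coarse_to_fine_encode(
    rng.standard_normal((5, D_OBJ)),
    rng.standard_normal((4, D_REL)),
    rng.standard_normal((1, D_SCENE)),
)
print(tokens.shape)  # (10, 128)
```

In a real system, the resulting token sequence would be fed to the language model alongside the text prompt; here the concatenation order simply makes the coarse-to-fine structure explicit.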