🤖 AI Summary
Interactive rendering of large-scale 3D urban scenes on resource-constrained devices is hampered by the high noise levels and missing semantic structure of automatically reconstructed models. To address both problems, this paper proposes a semantic-aware multi-scale Level-of-Detail (LOD) modeling framework. The method introduces a semantics-driven hierarchical segmentation strategy and enforces cross-LOD geometric–semantic consistency constraints, enabling, for the first time, component-level editable LOD generation for buildings. It integrates graph neural networks, progressive mesh encoding, and semantic segmentation models, augmented by a spatial topology optimization algorithm that jointly simplifies geometry, semantics, and texture. Evaluated on Cityscapes-3D and Semantic3D, the approach reduces LOD reconstruction error by 37% and improves rendering frame rate by 3.2×, enabling real-time WebGL-based interactive visualization.
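The paper's joint simplification algorithm is not detailed in the summary, but the core idea of cross-LOD geometric–semantic consistency can be illustrated with a toy sketch: a greedy edge-collapse simplifier that penalizes collapsing edges whose endpoints carry different semantic labels, so component boundaries (e.g. wall vs. roof) survive into coarser LODs. All names and the penalty scheme below are hypothetical, not the authors' implementation.

```python
def semantic_edge_collapse(vertices, edges, labels, target_count,
                           boundary_penalty=10.0):
    """Greedily collapse the cheapest edge until target_count vertices remain.
    Edges crossing semantic labels pay a large penalty, so semantically
    distinct components resist merging across LOD levels."""
    verts = dict(enumerate(vertices))   # vertex id -> (x, y, z)
    labels = dict(enumerate(labels))    # vertex id -> semantic label
    alive = set(verts)
    edge_set = {tuple(sorted(e)) for e in edges}

    def cost(u, v):
        # Euclidean edge length, inflated when the edge spans two labels.
        d = sum((a - b) ** 2 for a, b in zip(verts[u], verts[v])) ** 0.5
        return d * (boundary_penalty if labels[u] != labels[v] else 1.0)

    while len(alive) > target_count and edge_set:
        u, v = min(edge_set, key=lambda e: cost(*e))
        # Merge v into u: midpoint position, keep u's semantic label.
        verts[u] = tuple((a + b) / 2 for a, b in zip(verts[u], verts[v]))
        alive.discard(v)
        remapped = set()
        for a, b in edge_set:
            a, b = (u if a == v else a), (u if b == v else b)
            if a != b:                  # drop the collapsed (degenerate) edge
                remapped.add(tuple(sorted((a, b))))
        edge_set = remapped

    kept = sorted(alive)
    return [verts[i] for i in kept], [labels[i] for i in kept]
```

On a small strip of four vertices split between a "wall" and a "roof" label, the boundary edge is never the cheapest collapse, so both components survive at the coarser LOD. A real system would replace the length metric with quadric error and simplify texture coordinates alongside, as the summary's joint geometry–semantics–texture simplification suggests.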