🤖 AI Summary
To address the limited photorealism and interactivity in urban visualization, this paper proposes a novel method integrating CityGML-based 3D city models with 360° street-level panoramic video. We design a video–model alignment algorithm incorporating 3D geometric registration and spatiotemporal synchronization to enable dynamic projection of panoramic video frames onto CityGML surfaces. This work constitutes the first systematic, real-time fusion of georeferenced panoramic video with standardized CityGML data, yielding a hybrid urban visualization that supports pedestrian-scale navigation. Experimental evaluation demonstrates significant improvements in spatial cognition accuracy and interactive immersion. The approach enables intuitive, multi-scale exploration of geographic information and semantic-aware browsing across registered 3D geometry and video content. By bridging high-fidelity imagery with semantically rich 3D city models, our framework provides an extensible technical pathway for digital twin applications in smart cities.
📝 Abstract
We introduce a novel urban visualization system that integrates 3D urban models (CityGML) with 360° walkthrough videos. By aligning the videos with the model and dynamically projecting the relevant video frames onto the model geometry, our system creates photorealistic urban visualizations that allow users to intuitively interpret geospatial data from a pedestrian viewpoint.
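The abstract does not spell out how a video frame is mapped onto the model surfaces. A minimal sketch of the core geometric step, assuming an equirectangular 360° panorama and a camera pose already recovered by the registration stage (the function name, pose convention, and frame resolution below are illustrative assumptions, not the paper's API):

```python
import numpy as np

def world_to_equirect(point_w, cam_pos, R_wc, width, height):
    """Map a 3D world point (e.g., a CityGML surface vertex) to pixel
    coordinates in an equirectangular panorama frame.

    point_w : (3,) world coordinates of the surface point
    cam_pos : (3,) panorama camera position in world coordinates
    R_wc    : (3,3) world-to-camera rotation (from geometric registration)
    width, height : panorama frame resolution in pixels
    """
    # Viewing ray from the camera to the surface point, in camera frame.
    d = R_wc @ (np.asarray(point_w, float) - np.asarray(cam_pos, float))
    d /= np.linalg.norm(d)

    # Spherical angles of the ray: azimuth and elevation.
    lon = np.arctan2(d[0], d[2])                # in [-pi, pi], 0 = straight ahead
    lat = np.arcsin(np.clip(d[1], -1.0, 1.0))   # in [-pi/2, pi/2]

    # Equirectangular mapping: longitude -> u, latitude -> v.
    u = (lon / (2.0 * np.pi) + 0.5) * width
    v = (0.5 - lat / np.pi) * height
    return u, v
```

Evaluating this per vertex (or per fragment in a shader) yields texture coordinates for projecting the current video frame onto the registered CityGML geometry; a point straight ahead of the camera lands at the center of the frame.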