🤖 AI Summary
Current generative AI models struggle to synthesize high-fidelity satellite imagery at scale for urban planning, particularly when they must simultaneously satisfy realism, practical utility, and constraints from land use, infrastructure, and natural environments. To address this, we propose the first ControlNet-augmented Stable Diffusion framework for controllable remote sensing image synthesis. Our approach introduces a novel data paradigm that spatially aligns OpenStreetMap annotations with satellite imagery, enabling cross-city generalization and fine-grained semantic control. By integrating multimodal conditional generation with geospatial inputs, the framework supports scenario customization and design exploration. Evaluated across three major U.S. cities, our method achieves significantly lower FID and KID scores than baseline methods. Expert evaluations by urban planners and public participants further confirm that the generated imagery surpasses real satellite imagery in perceptual realism, visual diversity, and alignment with user intent.
📝 Abstract
Generative AI offers new opportunities for automating urban planning by creating site-specific urban layouts and enabling flexible design exploration. However, existing approaches often struggle to produce realistic and practical designs at scale. To this end, we adapt a state-of-the-art Stable Diffusion model, extended with ControlNet, to generate high-fidelity satellite imagery conditioned on land use descriptions, infrastructure, and natural environments. To overcome data availability limitations, we spatially link satellite imagery with structured land use and constraint information from OpenStreetMap. Using data from three major U.S. cities, we demonstrate that the proposed diffusion model generates realistic and diverse urban landscapes by varying land-use configurations, road networks, and water bodies, facilitating cross-city learning and design diversity. We also systematically evaluate the impacts of varying language prompts and control imagery on the quality of satellite imagery generation. Our model achieves low FID and KID scores (lower is better for both) and demonstrates robustness across diverse urban contexts. Qualitative assessments from urban planners and the general public show that generated images align closely with design descriptions and constraints, and are often preferred over real images. This work establishes a benchmark for controlled urban imagery generation and highlights the potential of generative AI as a tool for enhancing planning workflows and public engagement.
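As background on the FID metric reported above: FID is the Fréchet distance between Gaussian fits to embeddings of real and generated images, so lower values indicate closer distributions. Below is a minimal NumPy sketch of that closed form; the `fid_gaussian` name is illustrative (not from the paper), and the Inception-v3 feature-extraction step that produces the means and covariances in practice is omitted.

```python
import numpy as np

def fid_gaussian(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^(1/2)).
    In practice mu/sigma summarize Inception embeddings of real vs.
    generated images; that step is omitted in this sketch."""
    diff = mu1 - mu2
    # For SPD covariances, sigma1 @ sigma2 has real non-negative eigenvalues,
    # so Tr((sigma1 sigma2)^(1/2)) = sum of the square roots of those eigenvalues.
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    tr_covmean = np.sum(np.sqrt(np.clip(eigvals.real, 0.0, None)))
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * tr_covmean)

# Identical distributions give distance 0; shifting the mean by (3, 4)
# under identity covariances gives ||(3, 4)||^2 = 25.
mu, cov = np.zeros(2), np.eye(2)
print(round(fid_gaussian(mu, cov, mu, cov), 6))                    # 0.0
print(round(fid_gaussian(mu, cov, np.array([3.0, 4.0]), cov), 6))  # 25.0
```

KID is computed differently (an unbiased polynomial-kernel MMD over the same embeddings), but shares the convention that lower is better.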