🤖 AI Summary
Current generative AI systems for urban design suffer from weak human–AI collaboration, limited controllability, and neglect of the iterative nature of professional design workflows. To address these limitations, this paper proposes a three-stage progressive generation framework (road network and land-use planning → building layout planning → detailed planning and rendering) explicitly aligned with established urban design practice. Multimodal diffusion models are integrated into real-world design processes, enabling controllable generation under text and image constraints and supporting human review and refinement at each stage. The contributions include: (1) a stage-wise generation paradigm grounded in urban design logic; (2) semantic-driven interactive editing; and (3) an evaluation framework that jointly assesses design fidelity, regulatory compliance, and diversity. Evaluated on Chicago and New York City data, the framework outperforms end-to-end baselines by +23.6% in design fidelity, +18.4% in compliance, and +31.2% in diversity.
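To make the stepwise, human-in-the-loop structure concrete, the sketch below outlines the three-stage flow as sequential conditioned generation calls with a review checkpoint after each stage. All names (`DesignState`, `run_stage`, the stage labels, and the `generator`/`reviewer` callables) are hypothetical placeholders for illustration, not the paper's actual implementation or model interfaces.

```python
from dataclasses import dataclass, field

# Hypothetical stage names; the paper's stages are road network and land-use
# planning, building layout planning, and detailed planning and rendering.
STAGES = ["road_network_and_land_use", "building_layout", "detailed_rendering"]

@dataclass
class DesignState:
    """Accumulates accepted per-stage outputs as the pipeline progresses."""
    site_image: bytes                     # image-based constraint, e.g. a site boundary map
    prompt: str                           # textual design brief
    outputs: dict = field(default_factory=dict)

def run_stage(stage: str, state: DesignState, generator, reviewer) -> DesignState:
    """One progressive step: condition on the prompt, the image constraints, and all
    previously accepted stages, then let a human designer accept or revise the draft."""
    conditioning = {"prompt": state.prompt,
                    "image": state.site_image,
                    "previous": dict(state.outputs)}
    draft = generator(stage, conditioning)          # placeholder for a multimodal diffusion call
    state.outputs[stage] = reviewer(stage, draft)   # human-in-the-loop review / edit
    return state

def stepwise_generate(state: DesignState, generator, reviewer) -> DesignState:
    """Run the three stages in order rather than one end-to-end generation."""
    for stage in STAGES:
        state = run_stage(stage, state, generator, reviewer)
    return state
```

The key design choice this illustrates is that each stage consumes the human-approved outputs of earlier stages as additional conditioning, which is what preserves controllability and supports iterative refinement compared with a single end-to-end pass.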
📝 Abstract
Urban design is a multifaceted process that demands careful consideration of site-specific constraints and collaboration among diverse professionals and stakeholders. The advent of generative artificial intelligence (GenAI) offers transformative potential by improving the efficiency of design generation and facilitating the communication of design ideas. However, most existing approaches are not well integrated with human design workflows. They often follow end-to-end pipelines with limited control, overlooking the iterative nature of real-world design. This study proposes a stepwise generative urban design framework that integrates multimodal diffusion models with human expertise to enable more adaptive and controllable design processes. Instead of generating design outcomes in a single end-to-end process, the framework divides the process into three key stages aligned with established urban design workflows: (1) road network and land use planning, (2) building layout planning, and (3) detailed planning and rendering. At each stage, multimodal diffusion models generate preliminary designs based on textual prompts and image-based constraints, which can then be reviewed and refined by human designers. We design an evaluation framework to assess the fidelity, compliance, and diversity of the generated designs. Experiments using data from Chicago and New York City demonstrate that our framework outperforms baseline models and end-to-end approaches across all three dimensions. This study underscores the benefits of multimodal diffusion models and stepwise generation in preserving human control and facilitating iterative refinements, laying the groundwork for human-AI interaction in urban design solutions.
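The abstract evaluates generated designs along three axes: fidelity, compliance, and diversity. The snippet below is a minimal sketch of that three-axis structure; the specific formulas (1 − MAE for fidelity, a rule-pass fraction for compliance, mean pairwise L1 distance for diversity) are generic stand-ins chosen for illustration, not the metrics defined in the paper.

```python
import numpy as np

def fidelity(generated: np.ndarray, reference: np.ndarray) -> float:
    """Higher when the generated plan stays close to a reference design (here: 1 - MAE)."""
    return float(1.0 - np.abs(generated - reference).mean())

def compliance(generated: np.ndarray, rule_checks) -> float:
    """Fraction of regulatory rules (callables returning bool) that the design satisfies."""
    results = [bool(check(generated)) for check in rule_checks]
    return float(np.mean(results)) if results else 1.0

def diversity(samples: list[np.ndarray]) -> float:
    """Mean pairwise L1 distance between designs generated for the same site and prompt."""
    if len(samples) < 2:
        return 0.0
    dists = [np.abs(a - b).mean()
             for i, a in enumerate(samples) for b in samples[i + 1:]]
    return float(np.mean(dists))
```

Reporting the three scores separately, rather than collapsing them into a single number, mirrors the abstract's claim that the framework outperforms baselines across all three dimensions.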