🤖 AI Summary
Existing methods struggle to simultaneously achieve text-driven creative flexibility and object-level structural controllability, hindering joint improvements in stylistic diversity, geometric fidelity, and semantic consistency for 3D city generation. This paper introduces MajutsuCity, the first framework to enable natural-language-driven, style-adaptive, and structurally controllable 3D city synthesis. It employs a four-stage pipeline (text-to-semantic-layout generation, editable asset assembly, PBR material mapping, and ray-aligned rendering) guided jointly by height maps and semantic layouts. We design MajutsuAgent, an interactive editing agent supporting five object-level operations. Additionally, we curate MajutsuDataset, a high-quality multimodal dataset, and establish a comprehensive evaluation protocol covering structural accuracy, material realism, and lighting consistency. Experiments show MajutsuCity reduces layout FID by 83.7% over CityDreamer and by 20.1% over CityCraft, while achieving state-of-the-art AQS and RDR scores, significantly advancing controllable 3D urban generation.
📝 Abstract
Generating realistic 3D cities is fundamental to world models, virtual reality, and game development, where an ideal urban scene must offer stylistic diversity, fine-grained geometric fidelity, and controllability. However, existing methods struggle to balance the creative flexibility offered by text-based generation with the object-level editability enabled by explicit structural representations. We introduce MajutsuCity, a natural-language-driven and aesthetically adaptive framework for synthesizing structurally consistent and stylistically diverse 3D urban scenes. MajutsuCity represents a city as a composition of controllable layouts, assets, and materials, and operates through a four-stage pipeline. To extend controllability beyond initial generation, we further integrate MajutsuAgent, an interactive language-grounded editing agent that supports five object-level operations. To support photorealistic and customizable scene synthesis, we also construct MajutsuDataset, a high-quality multimodal dataset containing 2D semantic layouts and height maps, diverse 3D building assets, and curated PBR materials and skyboxes, each accompanied by detailed annotations. In addition, we develop a practical set of evaluation metrics covering key dimensions such as structural consistency, scene complexity, material fidelity, and lighting atmosphere. Extensive experiments demonstrate that MajutsuCity reduces layout FID by 83.7% compared with CityDreamer and by 20.1% compared with CityCraft. Our method ranks first in all AQS and RDR scores, outperforming existing methods by a clear margin. These results establish MajutsuCity as a new state of the art in geometric fidelity, stylistic adaptability, and semantic controllability for 3D city generation. We hope our framework inspires new avenues of research in 3D city generation. Our dataset and code will be released at https://github.com/LongHZ140516/MajutsuCity.