MajutsuCity: Language-driven Aesthetic-adaptive City Generation with Controllable 3D Assets and Layouts

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods struggle to simultaneously achieve text-driven creative flexibility and object-level structural controllability, hindering synergistic improvements in stylistic diversity, geometric fidelity, and semantic consistency for 3D city generation. This paper introduces MajutsuCity, the first framework enabling natural-language-driven, style-adaptive, and structurally controllable 3D city synthesis. It employs a four-stage pipeline (text-to-semantic-layout generation, editable asset assembly, PBR material mapping, and ray-aligned rendering), guided jointly by height maps and semantic layouts. The authors design MajutsuAgent, an interactive editing agent supporting five object-level operations, curate MajutsuDataset, a high-quality multimodal dataset, and establish a comprehensive evaluation protocol covering structural accuracy, material realism, and lighting consistency. Experiments show MajutsuCity reduces layout FID by 83.7% relative to CityDreamer and by 20.1% relative to CityCraft, while achieving state-of-the-art AQS and RDR scores, significantly advancing controllable 3D urban generation.
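The quoted percentages are relative FID reductions, which follow the standard formula (baseline - ours) / baseline. A minimal sketch, using hypothetical FID values chosen only to reproduce the quoted 83.7%; the paper's raw FID numbers are not given here:

```python
def relative_reduction(baseline_fid: float, ours_fid: float) -> float:
    """Relative improvement of ours over a baseline: (b - o) / b."""
    return (baseline_fid - ours_fid) / baseline_fid

# Hypothetical values: a baseline FID of 100.0 reduced to 16.3
# corresponds to the quoted 83.7% reduction over CityDreamer.
print(round(relative_reduction(100.0, 16.3), 3))  # 0.837
```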

📝 Abstract
Generating realistic 3D cities is fundamental to world models, virtual reality, and game development, where an ideal urban scene must combine stylistic diversity with fine-grained controllability. However, existing methods struggle to balance the creative flexibility offered by text-based generation with the object-level editability enabled by explicit structural representations. We introduce MajutsuCity, a natural-language-driven and aesthetically adaptive framework for synthesizing structurally consistent and stylistically diverse 3D urban scenes. MajutsuCity represents a city as a composition of controllable layouts, assets, and materials, and operates through a four-stage pipeline. To extend controllability beyond initial generation, we further integrate MajutsuAgent, an interactive language-grounded editing agent that supports five object-level operations. To support photorealistic and customizable scene synthesis, we also construct MajutsuDataset, a high-quality multimodal dataset containing 2D semantic layouts and height maps, diverse 3D building assets, and curated PBR materials and skyboxes, each accompanied by detailed annotations. We further develop a practical set of evaluation metrics covering key dimensions such as structural consistency, scene complexity, material fidelity, and lighting atmosphere. Extensive experiments demonstrate that MajutsuCity reduces layout FID by 83.7% compared with CityDreamer and by 20.1% compared with CityCraft. Our method ranks first across all AQS and RDR scores, outperforming existing methods by a clear margin. These results establish MajutsuCity as a new state-of-the-art in geometric fidelity, stylistic adaptability, and semantic controllability for 3D city generation. We expect our framework to inspire new avenues of research in 3D city generation. Our dataset and code will be released at https://github.com/LongHZ140516/MajutsuCity.
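The four-stage pipeline described above can be sketched as a sequence of scene transformations. This is an illustrative sketch only: all class names, function names, labels, and data shapes below are assumptions for exposition, not the authors' actual API or implementation.

```python
from dataclasses import dataclass, field

@dataclass
class CityScene:
    semantic_layout: list = field(default_factory=list)  # 2D grid of semantic labels
    height_map: list = field(default_factory=list)       # per-cell building heights
    assets: list = field(default_factory=list)           # placed, editable 3D assets
    materials: dict = field(default_factory=dict)        # asset index -> PBR material
    rendered: bool = False

def generate_layout(prompt: str) -> CityScene:
    """Stage 1: text -> semantic layout + height map (stubbed with toy data)."""
    scene = CityScene()
    scene.semantic_layout = [["building", "road"], ["park", "building"]]
    scene.height_map = [[12.0, 0.0], [0.0, 30.0]]
    return scene

def assemble_assets(scene: CityScene) -> CityScene:
    """Stage 2: place an editable asset for each building cell in the layout."""
    for row in scene.semantic_layout:
        for label in row:
            if label == "building":
                scene.assets.append(label)
    return scene

def map_materials(scene: CityScene) -> CityScene:
    """Stage 3: assign a PBR material to every placed asset (stubbed)."""
    scene.materials = {i: "pbr_concrete" for i in range(len(scene.assets))}
    return scene

def render(scene: CityScene) -> CityScene:
    """Stage 4: ray-aligned rendering, stubbed here as a completion flag."""
    scene.rendered = True
    return scene

scene = render(map_materials(assemble_assets(generate_layout("a rainy cyberpunk city"))))
print(len(scene.assets), scene.rendered)  # 2 True
```

Chaining the stages as plain functions mirrors the paper's claim that the layout and height map jointly guide all downstream steps: each later stage only reads state produced by the earlier ones.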
Problem

Research questions and friction points this paper is trying to address.

Balancing creative text-based generation with object-level structural editability
Achieving stylistic diversity while maintaining structural consistency in 3D cities
Extending controllability beyond initial generation through interactive editing operations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Language-driven framework for aesthetic-adaptive 3D city generation
Four-stage pipeline with controllable layouts, assets, and materials
Interactive language-grounded agent for object-level editing operations