🤖 AI Summary
Existing urban layout generation methods suffer from either geometric discontinuity (in image-based approaches) or semantic incompleteness (in graph-based approaches), which prevents achieving scalability and controllability simultaneously in large-scale 3D urban modeling. To address this, we propose a novel graph-structured representation that unifies geometric continuity with parcel-level semantics: we extend 2D parcel graphs into semantically enriched 3D layout graphs by introducing weighted topological edges and building-height embeddings. Furthermore, we design a semantic-conditioned graph neural network that jointly models parcel topology, geometric constraints, and user-specified semantic labels. Experiments demonstrate that our method generates layouts exhibiting both geometric plausibility and semantic coherence across large-scale urban scenes. It significantly improves modeling scalability and interactive controllability, making it suitable for applications in urban planning, virtual scene synthesis, and game development.
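The semantically enriched layout graph described above can be pictured as parcel nodes carrying a semantic label and a building height, connected by weighted topological edges. The sketch below is a minimal, hypothetical data model of that idea; all class and field names are illustrative assumptions, not the paper's actual representation or API:

```python
from dataclasses import dataclass, field

@dataclass
class Parcel:
    pid: int            # parcel identifier
    polygon: list       # 2D footprint vertices (x, y)
    semantic: str       # user-editable label, e.g. "residential"
    height: float       # building height lifted into the 3D graph

@dataclass
class LayoutGraph:
    # pid -> Parcel
    parcels: dict = field(default_factory=dict)
    # (pid_a, pid_b) -> edge weight (e.g. shared-boundary strength)
    edges: dict = field(default_factory=dict)

    def add_parcel(self, p: Parcel) -> None:
        self.parcels[p.pid] = p

    def add_edge(self, a: int, b: int, weight: float) -> None:
        # Undirected edge stored under a canonical key order.
        self.edges[tuple(sorted((a, b)))] = weight

    def set_semantic(self, pid: int, label: str) -> None:
        # The user-facing control point: editing a semantic
        # attribute re-conditions downstream generation.
        self.parcels[pid].semantic = label

g = LayoutGraph()
g.add_parcel(Parcel(0, [(0, 0), (1, 0), (1, 1), (0, 1)], "residential", 12.0))
g.add_parcel(Parcel(1, [(1, 0), (2, 0), (2, 1), (1, 1)], "commercial", 40.0))
g.add_edge(0, 1, weight=1.0)
g.set_semantic(0, "park")  # direct semantic control of the layout
```

Keeping the weight on the edge rather than the node is what lets topology (how strongly two parcels interact) stay separate from per-parcel semantics and geometry.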
📝 Abstract
Urban modeling is essential for city planning, scene synthesis, and gaming. Existing image-based methods generate diverse layouts but often lack geometric continuity and scalability, while graph-based methods capture structural relations yet overlook parcel semantics. We present a controllable framework for large-scale 3D vector urban layout generation, conditioned on both geometry and semantics. By fusing geometric and semantic attributes, introducing edge weights, and embedding building height in the graph, our method extends 2D layouts to realistic 3D structures. It also enables users to directly control the output by modifying semantic attributes. Experiments show that it produces valid, large-scale urban models, offering an effective tool for data-driven planning and design.
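One way to picture the semantic conditioning the abstract describes is a single message-passing step in which each parcel aggregates neighbor features over weighted edges and is concatenated with an encoding of its semantic label before projection. This is a hypothetical sketch of that mechanism, not the paper's architecture; the semantic vocabulary, feature sizes, and weight matrix are all assumed for illustration:

```python
import numpy as np

# Illustrative semantic vocabulary (an assumption, not from the paper).
SEMANTICS = ["residential", "commercial", "park"]

def one_hot(label: str) -> np.ndarray:
    v = np.zeros(len(SEMANTICS))
    v[SEMANTICS.index(label)] = 1.0
    return v

def message_pass(feats, edges, labels, W):
    """One weighted-mean aggregation step with semantic conditioning.

    feats:  (N, D) parcel features
    edges:  {(i, j): weight} undirected weighted topological edges
    labels: list of N semantic strings (the user-controllable input)
    W:      (D + |SEMANTICS|, D) learned projection (random/zero here)
    """
    n, _ = feats.shape
    agg = np.zeros_like(feats)
    deg = np.zeros(n)
    for (i, j), w in edges.items():
        agg[i] += w * feats[j]   # neighbor message, scaled by edge weight
        agg[j] += w * feats[i]
        deg[i] += w
        deg[j] += w
    agg /= np.maximum(deg, 1.0)[:, None]           # weighted mean
    cond = np.stack([one_hot(l) for l in labels])  # semantic conditioning
    return np.tanh(np.concatenate([agg, cond], axis=1) @ W)

feats = np.ones((2, 4))
edges = {(0, 1): 2.0}
labels = ["residential", "commercial"]
W = np.zeros((4 + len(SEMANTICS), 4))  # placeholder weights, shape check only
out = message_pass(feats, edges, labels, W)  # shape (2, 4)
```

Because the semantic one-hot enters every update, changing a parcel's label changes the features propagated to its neighbors, which is how user edits to semantics can steer the generated layout.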