🤖 AI Summary
To address key bottlenecks in automated 3D indoor scene synthesis (low visual fidelity, semantic inconsistency, weak user controllability, and a scarcity of high-quality training data), this paper proposes SpatialGen, a layout-guided, controllable generation framework. Methodologically, it introduces (1) a large-scale, multi-modal synthetic dataset designed specifically for indoor scene synthesis; (2) a multi-view, multi-modal diffusion model that jointly models geometry, appearance, and semantics, enforcing cross-modal spatial consistency via scene coordinate maps and text-image guidance; and (3) fine-grained generation conditioned on both 3D layouts and reference images. Experiments show significant improvements over state-of-the-art methods in visual fidelity, diversity, and semantic plausibility. The code, dataset, and pretrained models are publicly released.
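The scene coordinate map mentioned above is a per-pixel map of 3D world coordinates. As a minimal sketch of the underlying idea (not SpatialGen's actual code; the function name and conventions are illustrative, and a pinhole camera model is assumed), such a map can be obtained by unprojecting a depth map through the camera parameters:

```python
import numpy as np

def scene_coordinate_map(depth: np.ndarray, K: np.ndarray, cam_to_world: np.ndarray) -> np.ndarray:
    """Unproject a depth map into per-pixel world coordinates.

    depth:        (H, W) metric depth along the camera z-axis
    K:            (3, 3) pinhole intrinsics
    cam_to_world: (4, 4) camera-to-world extrinsics
    returns:      (H, W, 3) scene coordinate map in world space
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Back-project each pixel to a camera-space ray, then scale by its depth.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)  # (H, W, 3)
    rays = pix @ np.linalg.inv(K).T                                      # (H, W, 3)
    pts_cam = rays * depth[..., None]                                    # (H, W, 3)
    # Move the points from camera space to world space (homogeneous transform).
    pts_hom = np.concatenate([pts_cam, np.ones((H, W, 1))], axis=-1)
    pts_world = pts_hom @ cam_to_world.T
    return pts_world[..., :3]
```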
📝 Abstract
Creating high-fidelity 3D models of indoor environments is essential for applications in design, virtual reality, and robotics. However, manual 3D modeling remains time-consuming and labor-intensive. While recent advances in generative AI have enabled automated scene synthesis, existing methods often struggle to balance visual quality, diversity, semantic consistency, and user control. A major bottleneck is the lack of a large-scale, high-quality dataset tailored to this task. To address this gap, we introduce a comprehensive synthetic dataset featuring 12,328 structured, annotated scenes with 57,440 rooms and 4.7M photorealistic 2D renderings. Leveraging this dataset, we present SpatialGen, a novel multi-view, multi-modal diffusion model that generates realistic and semantically consistent 3D indoor scenes. Given a 3D layout and a reference image (derived from a text prompt), our model synthesizes appearance (a color image), geometry (a scene coordinate map), and semantics (a semantic segmentation map) from arbitrary viewpoints while preserving spatial consistency across modalities. In our experiments, SpatialGen consistently produces results superior to those of previous methods. We are open-sourcing our data and models to empower the community and advance the field of indoor scene understanding and generation.
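Because every pixel of a scene coordinate map stores a world-space point, spatial consistency between two generated views can be checked by reprojecting one view's points into the other camera and comparing what that view stores there. The sketch below illustrates this generic check (it is not SpatialGen's implementation; it ignores occlusion and visibility, and all names are illustrative):

```python
import numpy as np

def reprojection_residual(scm_a: np.ndarray, scm_b: np.ndarray,
                          world_to_cam_b: np.ndarray, K_b: np.ndarray) -> np.ndarray:
    """Project view A's scene coordinates into view B and compare.

    scm_a, scm_b:   (H, W, 3) scene coordinate maps of views A and B
    world_to_cam_b: (4, 4) world-to-camera extrinsics of view B
    K_b:            (3, 3) intrinsics of view B
    returns:        (H, W) per-pixel distance (in world units) between A's
                    points and the points B stores at the reprojected pixels
    """
    H, W, _ = scm_a.shape
    pts = np.concatenate([scm_a, np.ones((H, W, 1))], axis=-1)
    cam = (pts @ world_to_cam_b.T)[..., :3]          # A's points in B's frame
    uv = cam @ K_b.T                                 # perspective projection
    u = np.clip((uv[..., 0] / uv[..., 2]).round().astype(int), 0, W - 1)
    v = np.clip((uv[..., 1] / uv[..., 2]).round().astype(int), 0, H - 1)
    # Two spatially consistent views should store (nearly) the same world point.
    return np.linalg.norm(scm_b[v, u] - scm_a, axis=-1)
```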