🤖 AI Summary
Existing 3D street-scene generation methods generalize poorly and offer weak controllability in unbounded open-world settings such as autonomous driving, and they rely heavily on dense multi-view imagery, limiting their applicability to real-world benchmarks like nuScenes. To address this, we propose MagicDrive3D, a controllable 3D generation framework for open street scenes that jointly conditions synthesis on bird's-eye-view (BEV) maps, 3D object layouts, and textual descriptions. Our approach introduces a novel "generate-then-reconstruct" paradigm: a controllable video generation model is trained first, and the 3D scene is then reconstructed from its outputs. We further design a deformable Gaussian splatting model that incorporates monocular depth initialization and cross-view appearance modeling to mitigate exposure inconsistency. The method enables high-fidelity, diverse 3D scene synthesis with arbitrary-view rendering. Evaluated on nuScenes, it achieves state-of-the-art visual quality and significantly improves downstream BEV segmentation performance. This work establishes a new paradigm for photorealistic autonomous-driving simulation.
📝 Abstract
While controllable generative models for images and videos have achieved remarkable success, high-quality models for 3D scenes, particularly in unbounded scenarios like autonomous driving, remain underdeveloped due to high data acquisition costs. In this paper, we introduce MagicDrive3D, a novel pipeline for controllable 3D street scene generation that supports multi-condition control, including BEV maps, 3D objects, and text descriptions. Unlike previous methods that reconstruct before training generative models, MagicDrive3D first trains a video generation model and then reconstructs from the generated data. This approach enables easily controllable generation and static scene acquisition, resulting in high-quality scene reconstruction. To address minor errors in the generated content, we propose deformable Gaussian splatting with monocular depth initialization and appearance modeling to manage exposure discrepancies across viewpoints. Validated on the nuScenes dataset, MagicDrive3D generates diverse, high-quality 3D driving scenes that support any-view rendering and enhance downstream tasks like BEV segmentation. Our results demonstrate the framework's superior performance, showcasing its potential for autonomous driving simulation and beyond.
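To give a concrete intuition for the appearance modeling mentioned above, the sketch below shows one common way to reconcile exposure discrepancies across viewpoints: fitting a per-camera affine color correction (per-channel scale and offset) to the rendered image. This is a minimal illustrative sketch, not the paper's actual implementation; all function names and the affine form are assumptions.

```python
import numpy as np

# Hypothetical sketch: per-camera affine appearance correction.
# a * rendered + b is fit per color channel so the corrected render
# matches a reference view despite exposure mismatch.

def fit_affine_appearance(rendered, target):
    """Least-squares fit of per-channel scale a and offset b so that
    a * rendered + b approximates target."""
    a = np.zeros(3)
    b = np.zeros(3)
    for c in range(3):
        x = rendered[..., c].ravel()
        y = target[..., c].ravel()
        # Solve [x, 1] @ [a_c, b_c] = y in the least-squares sense.
        A = np.stack([x, np.ones_like(x)], axis=1)
        (a[c], b[c]), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

def apply_appearance(rendered, a, b):
    """Apply the per-channel affine correction and clamp to [0, 1]."""
    return np.clip(rendered * a + b, 0.0, 1.0)

# Toy demo: simulate a render that is 20% darker with a small offset.
rng = np.random.default_rng(0)
target = rng.uniform(0.2, 0.8, size=(16, 16, 3))
rendered = 0.8 * target - 0.05  # simulated exposure mismatch
a, b = fit_affine_appearance(rendered, target)
corrected = apply_appearance(rendered, a, b)
```

In a splatting pipeline such a correction would typically be optimized jointly with the scene parameters, one set of appearance coefficients per camera, rather than fit post hoc as here.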