🤖 AI Summary
Existing large-scale 3D driving scene generation methods suffer from either weak geometric grounding (e.g., diffusion models) or poor controllability and limited diversity (e.g., neural reconstruction). To address these limitations, we propose a framework that integrates proxy geometry modeling with score distillation. Specifically, conditioned on promptable map layouts, we distill 2D image diffusion priors into an explicit 3D geometric representation via differentiable rendering, ensuring object permanence and causal consistency while enabling geometrically coherent novel-view synthesis. To our knowledge, this is the first approach to support map-level semantic guidance for large-scale scene generation while simultaneously achieving high controllability, accurate 3D structure estimation, and photorealistic visual quality. Experiments demonstrate significant improvements over state-of-the-art methods in geometric fidelity, layout controllability, and cross-view consistency.
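For context, score distillation in this family of methods typically follows the score distillation sampling (SDS) gradient introduced in DreamFusion (Poole et al., 2022); the form below is that standard objective, not necessarily this paper's exact loss:

$$
\nabla_\theta \mathcal{L}_{\mathrm{SDS}} = \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, y, t) - \epsilon\big)\, \frac{\partial x}{\partial \theta} \,\right],
$$

where $x = g(\theta)$ is a differentiably rendered view of the 3D representation with parameters $\theta$, $x_t$ is its noised version at diffusion timestep $t$, $y$ is the conditioning signal (here, the map layout), $\hat{\epsilon}_\phi$ is the frozen 2D diffusion prior's noise prediction, and $w(t)$ is a timestep weighting.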
📝 Abstract
Large-scale scene data is essential for training and testing in robot learning. Neural reconstruction methods promise the ability to reconstruct large, physically grounded outdoor scenes from captured sensor data. However, these methods bake in static environments and allow only limited scene control -- they are functionally constrained in scene and trajectory diversity by the captures from which they are reconstructed. In contrast, generating driving data with recent image or video diffusion models offers control, but at the cost of geometric grounding and causality. In this work, we aim to bridge this gap and present a method that directly generates large-scale 3D driving scenes with accurate geometry, allowing for causal novel view synthesis with object permanence and explicit 3D geometry estimation. The proposed method combines the generation of a proxy geometry and environment representation with score distillation from learned 2D image priors. We find that this approach allows for high controllability, enabling prompt-guided geometry and high-fidelity texture and structure conditioned on map layouts -- producing realistic and geometrically consistent 3D generations of complex driving scenes.
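To make the distillation step concrete, here is a minimal, hypothetical sketch of score distillation into an explicitly parameterized, differentiably rendered representation. All names (`toy_render`, `toy_denoiser`) and the toy noise schedule are our own stand-ins, not the paper's implementation; a real pipeline would render the generated proxy geometry and query a map-layout-conditioned image diffusion model.

```python
import torch

# --- Illustrative stand-ins (our names, not the authors' API) -------------
def toy_render(params: torch.Tensor) -> torch.Tensor:
    # Identity "rendering" keeps the sketch runnable; gradients flow to params.
    # A real renderer would rasterize proxy geometry and environment textures.
    return params

def toy_denoiser(x_t: torch.Tensor, t: torch.Tensor, layout: torch.Tensor) -> torch.Tensor:
    # Placeholder noise prediction; a real prior is a trained, frozen network
    # conditioned on the map layout.
    return x_t - layout

alpha_bar = torch.linspace(0.999, 0.01, 1000)       # toy noise schedule
params = torch.rand(3, 64, 64, requires_grad=True)  # explicit 3D scene parameters
layout = torch.zeros(3, 64, 64)                     # map-layout conditioning signal
opt = torch.optim.Adam([params], lr=1e-2)

for step in range(100):
    opt.zero_grad()
    image = toy_render(params)                      # differentiable rendering
    t = torch.randint(0, len(alpha_bar), (1,))      # random diffusion timestep
    a, s = alpha_bar[t].sqrt(), (1.0 - alpha_bar[t]).sqrt()
    eps = torch.randn_like(image)
    x_t = a * image + s * eps                       # forward-diffuse the rendering
    with torch.no_grad():
        eps_hat = toy_denoiser(x_t, t, layout)      # frozen prior's prediction
    # Score distillation: use (eps_hat - eps) as the gradient w.r.t. the
    # rendered image, bypassing the diffusion model's Jacobian.
    image.backward(gradient=eps_hat - eps)
    opt.step()
```

The design choice mirrored here is that the 2D prior stays frozen and supervises the scene only through its predicted noise residual, so all learning is absorbed by the explicit 3D representation, which is what preserves geometry, object permanence, and cross-view consistency.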