DreamDrive: Generative 4D Scene Modeling from Street View Images

📅 2024-12-31
📈 Citations: 0 · Influential citations: 0
🤖 AI Summary
Existing 3D driving-scene reconstruction methods rely on costly manual annotations, while purely generative alternatives suffer from geometric distortions in their outputs, limiting both for training autonomous-driving perception models. To address this, we propose the first 4D (3D + time) dynamic scene generation framework that combines generative priors with reconstruction-based modeling. Given only unlabeled street-view images and ego-vehicle trajectories, our method employs a hybrid Gaussian representation to jointly model static and dynamic scene components, enabling spatiotemporally consistent neural rendering via self-supervised disentanglement. We further enhance geometric fidelity by using video diffusion priors to guide the optimization of the Gaussian splatting representation. Evaluated on nuScenes and in-the-wild street-view images, our approach improves the fidelity, controllability, and generalization of 4D scene generation, and demonstrably boosts downstream perception and planning performance.
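Below is a minimal, illustrative sketch of what a hybrid static/dynamic Gaussian representation of this kind might look like. All names, tensor shapes, and the linear-motion term are assumptions for exposition, not the authors' implementation: static Gaussians keep fixed positions while dynamic ones carry a per-Gaussian motion term, so both sets can be composed at any query time t.

```python
# Hypothetical sketch of a hybrid static/dynamic Gaussian scene container.
# Field names, tensor shapes, and the linear-motion model are illustrative
# assumptions, not DreamDrive's actual implementation.
from dataclasses import dataclass

import torch


@dataclass
class HybridGaussians:
    # Static component: positions fixed for the whole clip.
    static_means: torch.Tensor     # (Ns, 3)
    static_features: torch.Tensor  # (Ns, F) packed opacity/color/scale

    # Dynamic component: base position plus a learned motion term.
    dyn_means: torch.Tensor        # (Nd, 3)
    dyn_velocity: torch.Tensor     # (Nd, 3) linear motion, for illustration only
    dyn_features: torch.Tensor     # (Nd, F)

    def at_time(self, t: float) -> tuple[torch.Tensor, torch.Tensor]:
        """Union of static and time-warped dynamic Gaussians at time t."""
        dyn_pos = self.dyn_means + t * self.dyn_velocity
        means = torch.cat([self.static_means, dyn_pos], dim=0)
        feats = torch.cat([self.static_features, self.dyn_features], dim=0)
        return means, feats
```

Deciding which Gaussians belong to the static versus dynamic set is what the paper learns in a self-supervised manner; in this sketch the split is simply given.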

📝 Abstract
Synthesizing photo-realistic visual observations from an ego vehicle's driving trajectory is a critical step towards scalable training of self-driving models. Reconstruction-based methods create 3D scenes from driving logs and synthesize geometry-consistent driving videos through neural rendering, but their dependence on costly object annotations limits their ability to generalize to in-the-wild driving scenarios. On the other hand, generative models can synthesize action-conditioned driving videos in a more generalizable way but often struggle with maintaining 3D visual consistency. In this paper, we present DreamDrive, a 4D spatial-temporal scene generation approach that combines the merits of generation and reconstruction to synthesize generalizable 4D driving scenes and dynamic driving videos with 3D consistency. Specifically, we leverage the generative power of video diffusion models to synthesize a sequence of visual references and further elevate them to 4D with a novel hybrid Gaussian representation. Given a driving trajectory, we then render 3D-consistent driving videos via Gaussian splatting. The use of generative priors allows our method to produce high-quality 4D scenes from in-the-wild driving data, while neural rendering ensures 3D-consistent video generation from the 4D scenes. Extensive experiments on nuScenes and street view images demonstrate that DreamDrive can generate controllable and generalizable 4D driving scenes, synthesize novel views of driving videos with high fidelity and 3D consistency, decompose static and dynamic elements in a self-supervised manner, and enhance perception and planning tasks for autonomous driving.
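As a reading aid, here is a high-level sketch of the three-stage pipeline the abstract describes. The stage callables (diffuse_references, lift_to_4d, splat_render) are hypothetical placeholders with assumed signatures, standing in for the video diffusion model, the 4D lifting into the hybrid Gaussian representation, and Gaussian-splatting rendering, respectively; this is a sketch under those assumptions, not the paper's API.

```python
# Hypothetical end-to-end pipeline sketch; the three stage callables and
# their signatures are assumptions for illustration, not the paper's API.
from typing import Callable, List, Sequence

import torch


def dreamdrive_pipeline(
    street_view_image: torch.Tensor,         # (3, H, W) in-the-wild input frame
    ego_trajectory: Sequence[torch.Tensor],  # 4x4 camera-to-world poses
    diffuse_references: Callable,            # image -> list of reference frames
    lift_to_4d: Callable,                    # frames -> 4D hybrid Gaussian scene
    splat_render: Callable,                  # (scene, pose, t) -> rendered frame
) -> List[torch.Tensor]:
    """Synthesize a 3D-consistent driving video along the given trajectory."""
    # 1. A video diffusion prior synthesizes a sequence of visual references.
    reference_frames = diffuse_references(street_view_image)

    # 2. The references are elevated to a 4D scene, with static and dynamic
    #    elements disentangled self-supervisedly during optimization.
    scene = lift_to_4d(reference_frames)

    # 3. Gaussian splatting renders each trajectory pose, so every output
    #    frame is a view of the same 4D scene and stays 3D-consistent.
    n = max(len(ego_trajectory) - 1, 1)
    return [splat_render(scene, pose, i / n) for i, pose in enumerate(ego_trajectory)]
```

Because every frame is rendered from one shared 4D scene rather than sampled independently, view consistency comes from the representation itself, which is the key difference from purely generative video models.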
Problem

Research questions and friction points this paper is trying to address.

3D scene reconstruction
autonomous driving
realistic training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative Models
Neural Rendering
4D Driving Scenarios