StreetCrafter: Street View Synthesis with Controllable Video Diffusion Models

📅 2024-12-17
🏛️ arXiv.org
📈 Citations: 7 (Influential: 0)
🤖 AI Summary
This work addresses the severe degradation of neural rendering quality under large viewpoint shifts in autonomous driving scenes. We propose StreetCrafter, a controllable video diffusion model conditioned on LiDAR point cloud renderings, which act as pixel-level geometric priors inside the diffusion process and let the generative prior be exploited for novel view synthesis while preserving precise camera control. Our contributions are: (1) formulating sparse LiDAR signals as pixel-level geometric conditions, which keeps synthesis consistent as the camera deviates from the training trajectory and enables accurate pixel-level scene editing; and (2) incorporating the generative prior into dynamic scene representations so that fidelity and controllability are retained at real-time rendering speeds. Evaluated on the Waymo Open Dataset and PandaSet, our method substantially extends the effective view synthesis range and outperforms existing approaches on viewpoint extrapolation and editing tasks.
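To make "pixel-level conditions" concrete, here is a minimal sketch of one plausible reading: the LiDAR point cloud is projected into the target camera and the nearest colored point is kept per pixel, yielding a sparse image that conditions the diffusion model. Everything here (render_lidar_condition, the intrinsics K, the world-to-camera matrix w2c) is an illustrative assumption, not the paper's actual pipeline.

```python
# Hedged sketch: rasterize a colored LiDAR point cloud into a pixel-aligned
# condition map for a target camera. All names are illustrative assumptions.
import numpy as np

def render_lidar_condition(points_world, colors, K, w2c, hw):
    """Project LiDAR points into the target view; nearest point wins each pixel."""
    h, w = hw
    # Transform world points into the camera frame (homogeneous coordinates).
    pts = np.concatenate([points_world, np.ones((len(points_world), 1))], axis=1)
    cam = (w2c @ pts.T).T[:, :3]
    keep = cam[:, 2] > 0.1                      # drop points behind the camera
    cam, cols = cam[keep], colors[keep]
    # Perspective projection with intrinsics K.
    uvw = (K @ cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z, cols = u[inside], v[inside], cam[inside, 2], cols[inside]
    # Painter's-algorithm z-buffer: draw far-to-near so the nearest point wins.
    order = np.argsort(-z)
    cond = np.zeros((h, w, 3), dtype=np.float32)
    cond[v[order], u[order]] = cols[order]
    return cond                                  # sparse RGB condition image
```

A real pipeline would likely also splat each point over several pixels and aggregate multiple LiDAR sweeps to densify the map; the single-pixel version above is just the simplest correct form of the projection.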

📝 Abstract
This paper aims to tackle the problem of photorealistic view synthesis from vehicle sensor data. Recent advances in neural scene representation have achieved notable success in rendering high-quality autonomous driving scenes, but performance degrades significantly as the viewpoint deviates from the training trajectory. To mitigate this problem, we introduce StreetCrafter, a novel controllable video diffusion model that utilizes LiDAR point cloud renderings as pixel-level conditions, fully exploiting the generative prior for novel view synthesis while preserving precise camera control. Moreover, the pixel-level LiDAR conditions allow us to make accurate pixel-level edits to target scenes. In addition, the generative prior of StreetCrafter can be effectively incorporated into dynamic scene representations to achieve real-time rendering. Experiments on the Waymo Open Dataset and PandaSet demonstrate that our model enables flexible control over viewpoint changes and enlarges the region over which satisfactory renderings can be synthesized, outperforming existing methods.
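The abstract does not say how the condition enters the network. A common design in conditional diffusion models, sketched below purely as an assumption, is to encode the LiDAR rendering down to latent resolution and concatenate it with the noisy latent along the channel axis before the denoising backbone; `unet` here is a placeholder whose first layer is assumed to accept the widened input.

```python
# Hedged sketch of pixel-level conditioning via channel concatenation.
# The backbone, encoder, and shapes are assumptions, not StreetCrafter's API.
import torch
import torch.nn as nn

class ConditionedDenoiser(nn.Module):
    def __init__(self, unet: nn.Module, cond_channels: int = 3, latent_channels: int = 4):
        super().__init__()
        self.unet = unet  # placeholder denoiser; must accept 2 * latent_channels inputs
        # Tiny encoder mapping the LiDAR rendering down to latent resolution (1/8).
        self.cond_encoder = nn.Conv2d(cond_channels, latent_channels,
                                      kernel_size=8, stride=8)

    def forward(self, noisy_latent, lidar_render, t):
        cond = self.cond_encoder(lidar_render)          # (B, C, H/8, W/8)
        x = torch.cat([noisy_latent, cond], dim=1)      # channel-wise concat
        return self.unet(x, t)                          # predicts the noise
```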
Problem

Research questions and friction points this paper is trying to address.

Photorealistic view synthesis from vehicle sensor data
Mitigating the performance degradation that occurs as viewpoints deviate from the training trajectory
Enabling precise pixel-level edits using LiDAR conditions (see the editing sketch after this list)
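Because the condition is pixel-aligned, editing the point cloud directly edits the generated frames. The sketch below is hypothetical (object_mask, translation, and render_fn are assumed inputs, not the paper's API): translating one object's points and re-rendering the condition is enough to relocate that object in the synthesized video.

```python
# Hypothetical editing sketch: move an object by editing its LiDAR points,
# then re-render the pixel-aligned condition for the diffusion model.
import numpy as np

def edit_condition(points, colors, object_mask, translation, render_fn):
    """Translate the masked object's points and re-render the condition map."""
    edited = points.copy()
    edited[object_mask] += np.asarray(translation, dtype=points.dtype)
    # render_fn is any point-cloud-to-image projector, e.g. the sketch above;
    # frames regenerated from this condition show the object at its new location.
    return render_fn(edited, colors)
```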
Innovation

Methods, ideas, or system contributions that make the work stand out.

Controllable video diffusion model for view synthesis
Uses LiDAR point clouds as pixel-level conditions
Incorporates the generative prior into dynamic scene representations for real-time rendering (see the distillation sketch below)
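One plausible reading of "incorporated into dynamic scene representations", sketched below under stated assumptions: frames generated by the conditioned diffusion model at off-trajectory cameras serve as pseudo ground truth for a fast renderer (e.g., a Gaussian-splatting-style scene). The scene, diffusion, lidar, and sample_pose interfaces are hypothetical placeholders.

```python
# Hedged distillation sketch: supervise a real-time renderer with frames
# generated by the LiDAR-conditioned diffusion model at novel viewpoints.
# scene / diffusion / lidar / sample_pose are hypothetical interfaces.
import torch
import torch.nn.functional as F

def distill_step(scene, diffusion, lidar, optimizer, sample_pose):
    pose = sample_pose()                       # perturbed, off-trajectory camera
    cond = lidar.render(pose)                  # pixel-aligned LiDAR condition
    with torch.no_grad():
        pseudo_gt = diffusion.sample(cond)     # generated frame as pseudo GT
    pred = scene.render(pose)                  # fast rasterized prediction
    loss = F.l1_loss(pred, pseudo_gt)          # photometric distillation loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```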