DriveGen3D: Boosting Feed-Forward Driving Scene Generation with Efficient Video Diffusion

📅 2025-10-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing driving scene synthesis methods face three key bottlenecks: high computational cost for long-sequence generation, absence of explicit 3D representations, and difficulty modeling dynamic, multi-scenario interactions. To address these, we propose DriveGen3D, a two-stage unified framework. In Stage I, a lightweight video diffusion Transformer (FastDrive-DiT), jointly conditioned on text and bird's-eye-view (BEV) layouts, enables high-resolution (424×800) video generation. In Stage II, a feed-forward temporal 3D Gaussian reconstruction module (FastRecon3D) efficiently produces spatiotemporally consistent dynamic 3D scenes. DriveGen3D achieves a strong trade-off between parameter efficiency and generation fidelity, enabling real-time synthesis at 12 FPS. Quantitatively, novel-view reconstruction attains an SSIM of 0.811 and a PSNR of 22.84. The framework significantly enhances controllability, 3D spatial consistency, and temporal dynamics modeling compared to prior approaches.
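The two-stage flow described above can be sketched as a minimal pipeline skeleton. This is an illustrative mock-up only: the class and method names (`FastDriveDiT.generate`, `FastRecon3D.reconstruct`, `drivegen3d`) are hypothetical, since the paper's actual interfaces are not given in this summary; only the stage ordering, the conditioning inputs (text + BEV layout), and the reported 424×800 resolution come from the source.

```python
# Hypothetical sketch of the DriveGen3D two-stage pipeline.
# Names and signatures are assumptions, not the authors' API.
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Frame:
    height: int
    width: int

class FastDriveDiT:
    """Stage I (sketch): video diffusion Transformer conditioned on text + BEV layout."""
    def generate(self, text: str, bev_layout: Any, num_frames: int) -> List[Frame]:
        # A real model would run iterative denoising under both conditions;
        # here we emit placeholder frames at the paper's 424x800 resolution.
        return [Frame(424, 800) for _ in range(num_frames)]

class FastRecon3D:
    """Stage II (sketch): feed-forward temporal 3D Gaussian reconstruction."""
    def reconstruct(self, frames: List[Frame]) -> List[dict]:
        # One Gaussian set per time step; keeping the time index lets the
        # module enforce spatio-temporal consistency across the sequence.
        return [{"t": t, "gaussians": []} for t, _ in enumerate(frames)]

def drivegen3d(text: str, bev_layout: Any, num_frames: int = 12):
    """Run Stage I then Stage II; returns (video frames, dynamic 3D scene)."""
    frames = FastDriveDiT().generate(text, bev_layout, num_frames)
    scene = FastRecon3D().reconstruct(frames)
    return frames, scene
```

The key design point this sketch captures is that reconstruction is feed-forward: Stage II consumes the generated frames in a single pass rather than per-scene optimization, which is what makes the reported 12 FPS real-time rate plausible.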

📝 Abstract
We present DriveGen3D, a novel framework for generating high-quality and highly controllable dynamic 3D driving scenes that addresses critical limitations in existing methodologies. Current approaches to driving scene synthesis either suffer from prohibitive computational demands for extended temporal generation, focus exclusively on prolonged video synthesis without 3D representation, or restrict themselves to static single-scene reconstruction. Our work bridges this methodological gap by integrating accelerated long-term video generation with large-scale dynamic scene reconstruction through multimodal conditional control. DriveGen3D introduces a unified pipeline consisting of two specialized components: FastDrive-DiT, an efficient video diffusion transformer for high-resolution, temporally coherent video synthesis under text and Bird's-Eye-View (BEV) layout guidance; and FastRecon3D, a feed-forward reconstruction module that rapidly builds 3D Gaussian representations across time, ensuring spatial-temporal consistency. Together, these components enable real-time generation of extended driving videos (up to $424\times800$ at 12 FPS) and corresponding dynamic 3D scenes, achieving SSIM of 0.811 and PSNR of 22.84 on novel view synthesis, all while maintaining parameter efficiency.
Problem

Research questions and friction points this paper is trying to address.

Generating dynamic 3D driving scenes efficiently
Overcoming computational demands in extended video synthesis
Integrating video generation with 3D reconstruction consistently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient video diffusion transformer for coherent video synthesis
Feed-forward module for rapid 3D Gaussian scene reconstruction
Multimodal conditional control integrating text and BEV guidance
👥 Authors
Weijie Wang
PhD Student, Zhejiang University (Computer Vision, Efficient AI, Deep Learning)
Jiagang Zhu
GigaAI
Zeyu Zhang
GigaAI
Xiaofeng Wang
GigaAI
Zheng Zhu
GigaAI
Guosheng Zhao
Institute of Automation, Chinese Academy of Sciences
Chaojun Ni
GigaAI
Haoxiao Wang
Zhejiang University
Guan Huang
GigaAI
Xinze Chen
Unknown affiliation
Yukun Zhou
GigaAI
Wenkang Qin
Peking University
Duochao Shi
Zhejiang University
Haoyun Li
Institute of Automation, Chinese Academy of Sciences (Computer Vision)
Guanghong Jia
Tsinghua University
Jiwen Lu
Tsinghua University