Rethinking Driving World Model as Synthetic Data Generator for Perception Tasks

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing driving world models prioritize generative fidelity and controllability but overlook their practical utility for downstream perception tasks, particularly extreme-scenario detection. This paper introduces Dream4Drive, a driving world model explicitly designed as a perception-oriented synthetic data generator. It decomposes input videos into 3D-aware guidance maps, renders 3D assets onto them, and produces high-fidelity, editable multi-view RGB and multimodal video sequences. The authors also release DriveObj3D, a large-scale 3D driving asset dataset. Experiments demonstrate that perception models trained with Dream4Drive-synthesized data consistently outperform real-data baselines, under both identical and doubled real-data training epochs, with particularly pronounced gains in corner-case recognition. This work establishes a rigorous validation paradigm for assessing the efficacy of synthetic data in autonomous driving perception.

📝 Abstract
Recent advancements in driving world models enable controllable generation of high-quality RGB videos or multimodal videos. Existing methods primarily focus on metrics related to generation quality and controllability. However, they often overlook the evaluation of downstream perception tasks, which are **really crucial** for the performance of autonomous driving. Existing methods usually leverage a training strategy that first pretrains on synthetic data and finetunes on real data, resulting in twice the epochs compared to the baseline (real data only). When we double the epochs in the baseline, the benefit of synthetic data becomes negligible. To thoroughly demonstrate the benefit of synthetic data, we introduce Dream4Drive, a novel synthetic data generation framework designed for enhancing the downstream perception tasks. Dream4Drive first decomposes the input video into several 3D-aware guidance maps and subsequently renders the 3D assets onto these guidance maps. Finally, the driving world model is fine-tuned to produce the edited, multi-view photorealistic videos, which can be used to train the downstream perception models. Dream4Drive enables unprecedented flexibility in generating multi-view corner cases at scale, significantly boosting corner case perception in autonomous driving. To facilitate future research, we also contribute a large-scale 3D asset dataset named DriveObj3D, covering the typical categories in driving scenarios and enabling diverse 3D-aware video editing. We conduct comprehensive experiments to show that Dream4Drive can effectively boost the performance of downstream perception models under various training epochs. Project: https://wm-research.github.io/Dream4Drive/
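The abstract describes a three-stage pipeline: decompose the input video into 3D-aware guidance maps, render 3D assets (e.g., from DriveObj3D) onto those maps, then have the fine-tuned world model generate edited multi-view videos for perception training. A minimal sketch of that data flow follows; every class and function name here is an illustrative assumption, not the paper's actual API.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of the Dream4Drive pipeline described in the abstract.
# Names and data shapes are assumptions for illustration only.

@dataclass
class GuidanceMaps:
    """Stand-in for the 3D-aware guidance maps decomposed from a video clip."""
    depth: List[float]     # placeholder per-frame depth values
    semantics: List[int]   # placeholder per-frame semantic layouts

def decompose(video_frames: List[str]) -> GuidanceMaps:
    """Stage 1: decompose the input video into 3D-aware guidance maps."""
    n = len(video_frames)
    return GuidanceMaps(depth=[0.0] * n, semantics=[0] * n)

def render_asset(maps: GuidanceMaps, asset_id: str) -> GuidanceMaps:
    """Stage 2: render a 3D asset onto the guidance maps (edit the scene)."""
    # Mark every frame's semantic layout as edited by the inserted asset.
    return GuidanceMaps(depth=list(maps.depth),
                        semantics=[s + 1 for s in maps.semantics])

def generate_video(maps: GuidanceMaps) -> List[str]:
    """Stage 3: the fine-tuned world model produces edited multi-view frames."""
    return [f"edited_frame_{i}" for i in range(len(maps.depth))]

# Usage: one clip in, one edited (corner-case) clip out for perception training.
frames = ["f0", "f1", "f2"]
synthetic = generate_video(render_asset(decompose(frames), asset_id="car_001"))
print(len(synthetic))  # same number of frames as the input clip
```

The point of the sketch is the ordering: scene edits happen in the guidance-map space, so the same clip can be re-edited with many assets to mass-produce corner cases before the (expensive) video generation step.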
Problem

Research questions and friction points this paper is trying to address.

Enhancing autonomous driving perception through synthetic data generation
Addressing corner case perception limitations in existing driving models
Reducing dependency on real data by improving synthetic data utility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates multi-view videos using 3D-aware guidance maps
Fine-tunes world models for photorealistic synthetic data
Enhances perception models by creating corner case scenarios
👥 Authors
Kai Zeng (Peking University)
Zhanqian Wu (Xiaomi EV)
Kaixin Xiong (Xiaomi EV)
Xiaobao Wei (Institute of Software, Chinese Academy of Sciences)
Xiangyu Guo (Huazhong University of Science and Technology)
Zhenxin Zhu (Xiaomi AD)
Kalok Ho (Xiaomi EV)
Lijun Zhou (Xiaomi Corporation)
Bohan Zeng (Peking University)
Ming Lu (Xiaomi EV)
Haiyang Sun (Xiaomi EV)
Bing Wang (Xiaomi EV)
Guang Chen (Xiaomi EV)
Hangjun Ye (Xiaomi EV)
Wentao Zhang (Institute of Physics, Chinese Academy of Sciences)