🤖 AI Summary
Existing driving world models prioritize generative fidelity and controllability but overlook their practical utility for downstream perception tasks, particularly corner-case detection. This paper introduces Dream4Drive, a driving world model framework explicitly designed as a perception-oriented synthetic data generator. It decomposes input videos into 3D-aware guidance maps, renders 3D assets onto them, and fine-tunes a driving world model to produce high-fidelity, editable multi-view videos. The authors also release DriveObj3D, a large-scale 3D driving asset dataset covering the typical categories in driving scenes. Experiments show that perception models trained with Dream4Drive-synthesized data consistently outperform real-data-only baselines, both at matched epochs and when the baseline's real-data epochs are doubled, with particularly pronounced gains on corner cases. This work establishes a more rigorous validation paradigm for assessing the efficacy of synthetic data in autonomous driving perception.
📝 Abstract
Recent advances in driving world models enable controllable generation of high-quality RGB or multimodal videos. Existing methods primarily focus on metrics related to generation quality and controllability, but they often overlook evaluation on downstream perception tasks, which are crucial for autonomous driving performance. These methods usually adopt a training strategy that first pretrains on synthetic data and then finetunes on real data, which amounts to twice the epochs of the real-data-only baseline; when the baseline is instead trained for the same doubled number of epochs, the benefit of synthetic data becomes negligible. To thoroughly demonstrate the benefit of synthetic data, we introduce Dream4Drive, a novel synthetic data generation framework designed to enhance downstream perception tasks. Dream4Drive first decomposes the input video into several 3D-aware guidance maps and then renders 3D assets onto these guidance maps. Finally, the driving world model is fine-tuned to produce the edited, multi-view photorealistic videos, which can be used to train downstream perception models. Dream4Drive enables unprecedented flexibility in generating multi-view corner cases at scale, significantly boosting corner-case perception in autonomous driving. To facilitate future research, we also contribute a large-scale 3D asset dataset named DriveObj3D, covering the typical categories in driving scenarios and enabling diverse 3D-aware video editing. We conduct comprehensive experiments to show that Dream4Drive can effectively boost the performance of downstream perception models under various training epochs. Project page: https://wm-research.github.io/Dream4Drive/
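To make the data flow described in the abstract concrete, here is a minimal, illustrative Python sketch of the pipeline: an input video is decomposed into 3D-aware guidance maps, a 3D asset is rendered onto those maps, and a fine-tuned world model turns the edited guidance into multi-view frames for perception training. All class, function, and asset names below are hypothetical placeholders, not the authors' actual API; the stubs only mirror the high-level stages under that assumption.

```python
"""Hypothetical sketch of a Dream4Drive-style generation pipeline (not the authors' code)."""
from dataclasses import dataclass
from typing import List


@dataclass
class GuidanceMaps:
    """3D-aware guidance decomposed from a multi-view input video (illustrative fields)."""
    depth: List[str]      # per-frame depth maps
    semantics: List[str]  # per-frame semantic layouts
    boxes_3d: List[str]   # per-frame 3D box projections


def decompose_video(video_id: str, n_frames: int = 8) -> GuidanceMaps:
    # Placeholder: a real system would run 3D perception / reconstruction here.
    return GuidanceMaps(
        depth=[f"{video_id}/depth_{i}.png" for i in range(n_frames)],
        semantics=[f"{video_id}/sem_{i}.png" for i in range(n_frames)],
        boxes_3d=[f"{video_id}/box_{i}.json" for i in range(n_frames)],
    )


def render_asset_onto_guidance(maps: GuidanceMaps, asset_id: str) -> GuidanceMaps:
    # Placeholder: insert a 3D asset (e.g. from a DriveObj3D-style library) into the
    # guidance maps so the edit stays geometrically consistent across views and frames.
    return GuidanceMaps(
        depth=[f"{p}+{asset_id}" for p in maps.depth],
        semantics=[f"{p}+{asset_id}" for p in maps.semantics],
        boxes_3d=[f"{p}+{asset_id}" for p in maps.boxes_3d],
    )


def world_model_generate(maps: GuidanceMaps) -> List[str]:
    # Placeholder: a fine-tuned driving world model would synthesize multi-view
    # photorealistic frames conditioned on the edited guidance maps.
    return [f"frame_from({d})" for d in maps.depth]


if __name__ == "__main__":
    maps = decompose_video("scene_0001")
    edited = render_asset_onto_guidance(maps, asset_id="construction_vehicle")
    synthetic_frames = world_model_generate(edited)
    # The synthetic frames, together with the edited 3D boxes as labels, would then be
    # mixed with real data to train a downstream perception model on rare corner cases.
    print(f"Generated {len(synthetic_frames)} synthetic frames for perception training.")
```

The sketch is only meant to show why the approach yields labeled corner-case data for free: because the edit is made in the 3D-aware guidance space, the inserted object's 3D boxes are known by construction and can serve directly as supervision for the perception model.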