DrivingGen: A Comprehensive Benchmark for Generative Video World Models in Autonomous Driving

📅 2026-01-04
🏛️ arXiv.org
📈 Citations: 2
✨ Influential: 0
🤖 AI Summary
This work addresses the absence of a comprehensive evaluation benchmark for generative video world models in autonomous driving, where existing approaches inadequately assess critical aspects such as safety, trajectory plausibility, temporal consistency, and controllability. To bridge this gap, the authors introduce the first multidimensional benchmark tailored to this task, featuring a diverse dataset encompassing variations in weather, time of day, geographic regions, and complex driving behaviors. They further propose a novel evaluation framework that integrates metrics for visual realism, physical plausibility of trajectories, temporal coherence, and conditional control over the ego vehicle’s actions. Through systematic evaluation of 14 state-of-the-art models, the study reveals a fundamental trade-off between visual fidelity and physical realism, offering a unified, safety-oriented standard for assessing deployable driving world models.

πŸ“ Abstract
Video generation models, as one form of world models, have emerged as one of the most exciting frontiers in AI, promising agents the ability to imagine the future by modeling the temporal evolution of complex scenes. In autonomous driving, this vision gives rise to driving world models: generative simulators that imagine ego and agent futures, enabling scalable simulation, safe testing of corner cases, and rich synthetic data generation. Yet, despite fast-growing research activity, the field lacks a rigorous benchmark to measure progress and guide priorities. Existing evaluations remain limited: generic video metrics overlook safety-critical imaging factors; trajectory plausibility is rarely quantified; temporal and agent-level consistency is neglected; and controllability with respect to ego conditioning is ignored. Moreover, current datasets fail to cover the diversity of conditions required for real-world deployment. To address these gaps, we present DrivingGen, the first comprehensive benchmark for generative driving world models. DrivingGen combines a diverse evaluation dataset curated from both driving datasets and internet-scale video sources, spanning varied weather, time of day, geographic regions, and complex maneuvers, with a suite of new metrics that jointly assess visual realism, trajectory plausibility, temporal coherence, and controllability. Benchmarking 14 state-of-the-art models reveals clear trade-offs: general models look better but break physics, while driving-specific ones capture motion realistically but lag in visual quality. DrivingGen offers a unified evaluation framework to foster reliable, controllable, and deployable driving world models, enabling scalable simulation, planning, and data-driven decision-making.
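The abstract describes four evaluation axes (visual realism, trajectory plausibility, temporal coherence, controllability) that jointly characterize a model, and a trade-off between visual fidelity and physical realism. A minimal sketch of how per-axis scores might be collected into a single per-model report; all names, score values, and the unweighted-mean aggregation are assumptions for illustration, not the paper's actual metrics or protocol:

```python
from dataclasses import dataclass

# Hypothetical per-axis scores for one model, normalized to [0, 1].
# The paper's actual metric definitions and scales are not reproduced here.
@dataclass
class AxisScores:
    visual_realism: float           # video quality of generated frames
    trajectory_plausibility: float  # physical realism of predicted motion
    temporal_coherence: float       # frame-to-frame / agent-level consistency
    controllability: float          # adherence to ego-action conditioning

def summarize(model: str, s: AxisScores) -> str:
    """Aggregate the four axes into an unweighted mean (illustrative only)."""
    axes = [s.visual_realism, s.trajectory_plausibility,
            s.temporal_coherence, s.controllability]
    mean = sum(axes) / len(axes)
    return (f"{model}: mean={mean:.2f} "
            f"(visual={s.visual_realism:.2f}, "
            f"physics={s.trajectory_plausibility:.2f})")

# Dummy numbers illustrating the reported trade-off: a general video model
# scoring high on visual fidelity but low on physical plausibility, and a
# driving-specific model showing the reverse pattern.
print(summarize("general-video-model", AxisScores(0.90, 0.45, 0.70, 0.50)))
print(summarize("driving-specific-model", AxisScores(0.60, 0.85, 0.80, 0.75)))
```

A real benchmark would likely weight the axes differently or report them separately rather than collapsing them, since a single mean hides exactly the fidelity-versus-physics trade-off the paper highlights.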
Problem

Research questions and friction points this paper is trying to address:

generative video world models · autonomous driving · benchmark · trajectory plausibility · controllability
Innovation

Methods, ideas, or system contributions that make the work stand out:

generative video world models · autonomous driving simulation · comprehensive benchmark · trajectory plausibility · controllability