🤖 AI Summary
Safety verification of autonomous driving systems faces critical challenges: real-world road testing is risky and costly, and rare failure scenarios are difficult to cover comprehensively. To address this, we propose a zero-shot failure-scenario generation method based on denoising diffusion models. The method requires no real-world driving data, no annotations, and no internal knowledge of the system under test; it needs only a modeled traffic simulation environment and unsupervised learning of fault patterns. To our knowledge, this is the first application of conditional diffusion models to generating high-fidelity, diverse potential failure scenarios for autonomous driving, enabling low-resource, generalizable safety validation. Evaluated on a four-way intersection task, the method trains and runs inference efficiently on commodity GPUs, significantly improving coverage of rare failures and validation efficiency. This work establishes a scalable, cost-effective safety verification paradigm for autonomous driving systems.
📝 Abstract
Safety validation of autonomous driving systems is extremely challenging due to the high risks and costs of real-world testing as well as the rarity and diversity of potential failures. To address these challenges, we train a denoising diffusion model to generate potential failure cases of an autonomous vehicle given any initial traffic state. Experiments on a four-way intersection problem show that, across a variety of scenarios, the diffusion model generates realistic failure samples while capturing a wide variety of potential failures. Our model requires no external training dataset, performs training and inference with modest computing resources, and assumes no prior knowledge of the system under test, making it applicable to safety validation at traffic intersections.
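The core mechanism described above can be sketched as a conditional DDPM reverse-sampling loop: start from Gaussian noise and iteratively denoise toward a candidate failure scenario, conditioning each step on the initial traffic state. This is only a minimal illustrative sketch; the network stand-in `predict_noise`, the step count, the noise schedule, and the 4-dimensional scenario encoding are all assumptions, not the paper's actual implementation.

```python
import numpy as np

T = 50                                # assumed number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)    # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x, t, cond):
    """Stand-in for a trained noise-prediction network eps_theta(x_t, t, cond).
    Here a dummy rule nudges samples relative to the conditioning state."""
    return (x - cond) * 0.1

def sample_failure(initial_state, seed=0):
    """Reverse diffusion: denoise pure noise into a candidate failure
    scenario, conditioned on the given initial traffic state."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(initial_state.shape)  # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = predict_noise(x, t, initial_state)
        # DDPM posterior mean update
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                                  # add noise except at the last step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(initial_state.shape)
    return x                                       # candidate failure scenario

# Hypothetical 4-dimensional scenario encoding, conditioned on a zero state
scenario = sample_failure(initial_state=np.zeros(4))
print(scenario.shape)
```

Different seeds yield different candidate failures for the same initial state, which is how a diffusion sampler can cover a diverse set of rare failure modes rather than collapsing to a single adversarial case.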