Adversarial Generation and Collaborative Evolution of Safety-Critical Scenarios for Autonomous Vehicles

📅 2025-08-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing autonomous driving safety evaluation relies on predefined threat patterns, limiting its ability to uncover diverse and unforeseen failure scenarios. To address this, the authors propose ScenGE, a framework that uses large language models (LLMs), grounded in structured driving knowledge, for meta-scenario reasoning and executable scenario code generation. ScenGE then constructs an adversarial collaborator graph to drive the co-evolution of background vehicles, which dynamically constrain the ego vehicle's navigable space and introduce critical occlusions. The framework supports deployment across simulators and the testing of LLM-based autonomous driving systems. Experiments show that ScenGE uncovers 31.96% more severe collision cases on average across multiple reinforcement learning models, and adversarial training on the generated scenarios significantly improves model robustness. Real-world vehicle tests and human evaluations confirm that the generated scenarios are both plausible and hazardous.
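
The two-stage pipeline summarized above (LLM-driven meta-scenario generation, then background-vehicle evolution) can be sketched roughly as follows. All names and parameters here (`generate_meta_scenario`, `evolve_background_traffic`, the cut-in behavior spec) are illustrative assumptions, not the paper's actual API:

```python
# Hypothetical sketch of the two-stage ScenGE pipeline; names and
# parameters are invented for illustration, not the paper's real code.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    ego_route: list          # waypoints for the ego vehicle
    adversary: dict          # behavior spec for the LLM-inferred adversarial agent
    background: list = field(default_factory=list)  # background-vehicle trajectories

def generate_meta_scenario(benign_prompt: str) -> Scenario:
    """Stage 1: an LLM, grounded in structured driving knowledge, infers a
    plausible adversarial agent and emits it as an executable scenario spec.
    The LLM call is stubbed out; one plausible output is hard-coded."""
    return Scenario(
        ego_route=[(0.0, 0.0), (100.0, 0.0)],
        adversary={"type": "cut_in", "trigger_dist_m": 20.0, "speed_mps": 12.0},
    )

def evolve_background_traffic(scn: Scenario, n_vehicles: int = 3) -> Scenario:
    """Stage 2: add background vehicles whose trajectories amplify the core
    threat by shrinking the ego's maneuvering space (toy perturbation only)."""
    for i in range(n_vehicles):
        lane_offset = 3.5 * (1 if i % 2 == 0 else -1)  # flank left/right lanes
        scn.background.append([(x, lane_offset) for x, _ in scn.ego_route])
    return scn

scenario = evolve_background_traffic(generate_meta_scenario("clear day, two-lane road"))
print(len(scenario.background))  # → 3
```

In the actual framework, Stage 2 optimizes these trajectories against the ego policy rather than placing them heuristically.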

📝 Abstract
The generation of safety-critical scenarios in simulation has become increasingly crucial for the safety evaluation of autonomous vehicles prior to on-road deployment. However, current approaches largely rely on predefined threat patterns or rule-based strategies, which limit their ability to expose diverse and unforeseen failure modes. To overcome these limitations, we propose ScenGE, a framework that can generate plentiful safety-critical scenarios by reasoning about novel adversarial cases and then amplifying them with complex traffic flows. Given a simple prompt of a benign scene, it first performs Meta-Scenario Generation, where a large language model, grounded in structured driving knowledge, infers an adversarial agent whose behavior poses a threat that is both plausible and deliberately challenging. This meta-scenario is then specified in executable code for precise in-simulator control. Subsequently, Complex Scenario Evolution uses background vehicles to amplify the core threat introduced by the meta-scenario. It builds an adversarial collaborator graph to identify key agent trajectories for optimization. These perturbations are designed to simultaneously reduce the ego vehicle's maneuvering space and create critical occlusions. Extensive experiments conducted on multiple reinforcement learning based AV models show that ScenGE uncovers more severe collision cases (+31.96%) on average than SoTA baselines. Additionally, ScenGE can be applied to large-model-based AV systems and deployed on different simulators; we further observe that adversarial training on our scenarios improves model robustness. Finally, we validate our framework through real-world vehicle tests and human evaluation, confirming that the generated scenarios are both plausible and critical. We hope our work constitutes a critical step towards building public trust in autonomous vehicles and ensuring their safe deployment.
Problem

Research questions and friction points this paper is trying to address.

Generating diverse safety-critical scenarios for autonomous vehicle testing
Overcoming limitations of predefined threat patterns in simulation
Amplifying adversarial scenarios to expose unforeseen failure modes
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-driven adversarial scenario generation
Adversarial collaborator graph optimization
Simulator-agnostic safety evaluation framework
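
The adversarial collaborator graph named above can be illustrated with a minimal sketch: agents become nodes, and an edge links agents whose positions let them jointly constrain the ego vehicle. The proximity criterion, threshold, and all names here are invented for the example; the paper optimizes trajectories against the ego policy rather than using a fixed radius:

```python
# Toy illustration of an adversarial collaborator graph: nodes are background
# vehicles, edges connect pairs close enough to the ego to cooperate in
# reducing its maneuvering space. The radius threshold is an assumption.
import math

def collaborator_graph(agent_positions, ego_pos, radius=15.0):
    """Return agents near the ego and the edges between every near pair."""
    near = [a for a, p in agent_positions.items()
            if math.dist(p, ego_pos) <= radius]
    edges = {(a, b) for i, a in enumerate(near) for b in near[i + 1:]}
    return near, edges

agents = {"bv1": (5, 3), "bv2": (8, -3), "bv3": (40, 0)}
nodes, edges = collaborator_graph(agents, ego_pos=(0, 0))
print(sorted(nodes), sorted(edges))  # → ['bv1', 'bv2'] [('bv1', 'bv2')]
```

Trajectories of the agents selected this way would then be the ones perturbed to shrink the ego's navigable space and create occlusions.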
Jiangfan Liu
Beihang University
Yongkang Guo
Beihang University
Fangzhi Zhong
Beihang University
Tianyuan Zhang
MIT
Computer Vision · Machine Learning
Zonglei Jing
Beihang University
Machine Learning · Reinforcement Learning · Optimal Control
Siyuan Liang
College of Computing and Data Science, Nanyang Technological University
Trustworthy Foundation Model
Jiakai Wang
Zhongguancun Laboratory
Adversarial examples · Trustworthy AI
Mingchuan Zhang
Henan University of Science and Technology
Aishan Liu
Beihang University
Xianglong Liu
Beihang University, Zhongguancun Laboratory