ADV-0: Closed-Loop Min-Max Adversarial Training for Long-Tail Robustness in Autonomous Driving

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the insufficient robustness of autonomous driving systems in rare yet safety-critical long-tail scenarios by proposing the first closed-loop min-max optimization framework that formulates adversarial training as a zero-sum Markov game. The approach jointly optimizes driving policies and adversarial scenarios through iteratively evolved adversaries driven by preference learning, with theoretical guarantees of convergence to a Nash equilibrium and maximization of a certified lower bound on real-world performance. Experiments show that the framework uncovers diverse safety-critical failure modes and significantly improves the generalization and robustness of both learned policies and motion planners under previously unseen long-tail risks.
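
Read as a min-max problem, the zero-sum game can be written compactly. The notation below is an assumed formalization for illustration, not necessarily the paper's own: $\pi$ is the driving policy (defender), $\psi$ the adversarial scenario policy (attacker), $r$ the defender's reward, and $\gamma$ a discount factor.

```latex
% Assumed notation, for illustration only: pi = defender (driving policy),
% psi = attacker (scenario policy), r = defender reward, gamma = discount.
\pi^{*} \in \arg\max_{\pi \in \Pi} \min_{\psi \in \Psi}\;
  \mathbb{E}_{\tau \sim p(\cdot \mid \pi, \psi)}
  \Big[ \textstyle\sum_{t=0}^{T} \gamma^{t}\, r(s_t, a_t) \Big],
\qquad
U_{\mathrm{att}}(\pi, \psi) = -\,U_{\mathrm{def}}(\pi, \psi).
```

The second identity is simply the zero-sum condition: the attacker's utility is the negation of the defender's objective, which is the alignment the summary describes.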

📝 Abstract
Deploying autonomous driving systems requires robustness against long-tail scenarios that are rare but safety-critical. While adversarial training offers a promising solution, existing methods typically decouple scenario generation from policy optimization and rely on heuristic surrogates. This leads to objective misalignment and fails to capture the shifting failure modes of evolving policies. This paper presents ADV-0, a closed-loop min-max optimization framework that treats the interaction between the driving policy (defender) and the adversarial agent (attacker) as a zero-sum Markov game. By aligning the attacker's utility directly with the defender's objective, we reveal the optimal adversary distribution. To make this tractable, we cast dynamic adversary evolution as iterative preference learning, efficiently approximating this optimum and offering an algorithm-agnostic solution to the game. Theoretically, ADV-0 converges to a Nash equilibrium and maximizes a certified lower bound on real-world performance. Experiments indicate that it effectively exposes diverse safety-critical failures and greatly enhances the generalizability of both learned policies and motion planners against unseen long-tail risks.
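
A toy sketch may help make the closed-loop alternation concrete. Everything below is an illustrative assumption rather than the paper's implementation: `ScenarioGenerator`, `rollout_return`, the Gaussian scenario model, and the Bradley-Terry-style preference update are stand-ins for the defender/attacker alternation and the iterative preference learning the abstract describes.

```python
# Minimal sketch of closed-loop min-max adversarial training with a
# preference-learned adversary. All names here are illustrative stand-ins,
# NOT the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

SCENARIO_DIM = 4  # toy scenario parameters (e.g., cut-in gap, attacker speed)

class ScenarioGenerator(nn.Module):
    """Attacker: a Gaussian distribution over scenario parameters."""
    def __init__(self):
        super().__init__()
        self.mean = nn.Parameter(torch.zeros(SCENARIO_DIM))
        self.log_std = nn.Parameter(torch.zeros(SCENARIO_DIM))

    def dist(self):
        return torch.distributions.Normal(self.mean, self.log_std.exp())

def rollout_return(policy_param, scenario):
    """Toy stand-in for the defender's closed-loop return in one scenario:
    return falls off as the scenario hits the policy's weak spot."""
    return -((scenario - policy_param) ** 2).sum()

policy_param = torch.zeros(SCENARIO_DIM, requires_grad=True)  # toy "policy"
adversary = ScenarioGenerator()
opt_def = torch.optim.Adam([policy_param], lr=1e-2)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)

for step in range(200):
    # Attacker step: evolve the scenario distribution by preference learning.
    with torch.no_grad():
        s_a = adversary.dist().sample()
        s_b = adversary.dist().sample()
        # Zero-sum: the attacker prefers whichever scenario makes the
        # frozen defender perform worse.
        ret_a = rollout_return(policy_param, s_a)
        ret_b = rollout_return(policy_param, s_b)
    win, lose = (s_a, s_b) if ret_a < ret_b else (s_b, s_a)
    # Bradley-Terry-style loss: raise the likelihood of the preferred scenario.
    logp_win = adversary.dist().log_prob(win).sum()
    logp_lose = adversary.dist().log_prob(lose).sum()
    adv_loss = -F.logsigmoid(logp_win - logp_lose)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # Defender step: maximize return on scenarios drawn from the
    # current (just-updated) adversary.
    scen = adversary.dist().sample()
    def_loss = -rollout_return(policy_param, scen)
    opt_def.zero_grad()
    def_loss.backward()
    opt_def.step()
```

In the actual framework the defender would be a full driving policy or motion planner evaluated in closed-loop simulation. The structural point the sketch preserves is that the attacker's preference labels come directly from the defender's own returns, which is what keeps the two objectives aligned rather than relying on heuristic surrogates.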
Problem

Research questions and friction points this paper is trying to address.

long-tail robustness
autonomous driving
adversarial training
safety-critical scenarios
policy optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

closed-loop adversarial training
min-max optimization
zero-sum Markov game
long-tail robustness
iterative preference learning
Tong Nie
The Hong Kong Polytechnic University, Hong Kong SAR, China
Yihong Tang
McGill University
Junlin He
The Hong Kong Polytechnic University
Yuewen Mei
Tongji University, Shanghai, China
Jie Sun
University of Science and Technology of China
Lijun Sun
McGill University, Montreal, QC, Canada
Wei Ma
The Hong Kong Polytechnic University
Jian Sun
Tongji University, Shanghai, China