Revisiting Adversarial Perception Attacks and Defense Methods on Autonomous Driving Systems

📅 2025-05-14
🤖 AI Summary
Deep learning perception models in autonomous driving systems (ADS) face practical risks from adversarial attacks, particularly in safety-critical tasks such as traffic sign recognition and lead-vehicle detection/distance estimation. Method: Leveraging the OpenPilot production-grade platform and YOLO-based detectors, this work conducts the first unified robustness evaluation of multiple defense strategies—including adversarial training, image preprocessing, contrastive learning, and diffusion-based denoising—under physically realizable adversarial perturbations on an L2-level vehicle. Contribution/Results: The study quantitatively characterizes the effectiveness boundaries and limitations of each defense in realistic driving scenarios, establishes a reproducible benchmark for physical-world adversarial robustness assessment, and delivers empirically grounded, actionable guidelines for designing robust perception modules in ADS. Findings reveal significant performance gaps between lab-simulated and real-world adversarial settings, underscoring the necessity of physics-aware evaluation and defense co-design.

📝 Abstract
Autonomous driving systems (ADS) increasingly rely on deep learning-based perception models, which remain vulnerable to adversarial attacks. In this paper, we revisit adversarial attacks and defense methods, focusing on road sign recognition and lead object detection and prediction (e.g., relative distance). Using a Level-2 production ADS, OpenPilot by Comma.ai, and the widely adopted YOLO model, we systematically examine the impact of adversarial perturbations and assess defense techniques, including adversarial training, image processing, contrastive learning, and diffusion models. Our experiments highlight both the strengths and limitations of these methods in mitigating complex attacks. Through targeted evaluations of model robustness, we aim to provide deeper insights into the vulnerabilities of ADS perception systems and contribute guidance for developing more resilient defense strategies.
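The adversarial perturbations studied here belong to the gradient-based FGSM/PGD family. As a hedged illustration only — a toy logistic-regression "detector" in NumPy, not the paper's YOLO or OpenPilot pipeline — a single FGSM step can be sketched as:

```python
import numpy as np

# Toy FGSM sketch: a linear logistic "detector" scores a flattened image
# patch x in [0,1]^d; the attack nudges each pixel by eps in the direction
# that increases the loss. (Illustrative assumption only -- the paper
# attacks YOLO and OpenPilot perception models, not this toy model.)

def logistic_loss(w, x, y):
    # y is +1 (e.g. "stop sign present") or -1
    return np.log1p(np.exp(-y * np.dot(w, x)))

def fgsm_perturb(w, x, y, eps):
    # Analytic input gradient of the logistic loss:
    # dL/dx = -y * w / (1 + exp(y * w.x))
    margin = y * np.dot(w, x)
    grad_x = -y * w / (1.0 + np.exp(margin))
    # One signed step of size eps per pixel, clipped to the valid pixel range
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
```

Because the toy loss is convex in the input, the perturbed patch never scores better for the true label, and the perturbation stays inside the eps-ball — the same budget constraint that makes such attacks physically realizable as printable patches.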
Problem

Research questions and friction points this paper is trying to address.

Assessing adversarial attack impacts on autonomous driving perception models
Evaluating defense methods for road sign and object detection vulnerabilities
Enhancing robustness of deep learning-based ADS against complex attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Examining adversarial attacks on autonomous driving perception
Assessing defense methods like adversarial training
Evaluating model robustness to guide more resilient defense strategies
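Among the defenses listed, adversarial training is the most self-contained to sketch: each update trains on freshly crafted adversarial inputs rather than clean ones. A minimal NumPy sketch on a toy logistic model (a hypothetical stand-in, not the paper's training setup):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adv_train_step(w, x, y, eps=0.05, lr=0.1):
    """One adversarial-training step on a toy logistic model (illustrative only)."""
    # Inner step: craft an FGSM example against the current weights
    grad_x = -y * sigmoid(-y * np.dot(w, x)) * w
    x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
    # Outer step: descend the loss evaluated at the adversarial point
    grad_w = -y * sigmoid(-y * np.dot(w, x_adv)) * x_adv
    return w - lr * grad_w
```

In the full min-max formulation the inner attack would be multi-step PGD over a batch; this sketch collapses both loops to a single example to keep the structure visible.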
Cheng Chen
Louisiana State University, Baton Rouge, LA 70803
Yuhong Wang
Louisiana State University, Baton Rouge, LA 70803
Nafis S Munir
Louisiana State University, Baton Rouge, LA 70803
Xiangwei Zhou
Associate Professor of Electrical and Computer Engineering, Louisiana State University
Wireless Communications · Signal Processing · Federated Learning · IoT & AI
Xugui Zhou
Assistant Professor, Louisiana State University
Dependability · Cyber-Physical Systems · ML · Formal Methods · Control