🤖 AI Summary
Deep learning perception models in autonomous driving systems (ADS) face practical risks from adversarial attacks, particularly in safety-critical tasks such as traffic sign recognition and lead-vehicle detection/distance estimation.
Method: Leveraging the production-grade OpenPilot platform and YOLO-based detectors, this work conducts the first unified robustness evaluation of multiple defense strategies—including adversarial training, image preprocessing, contrastive learning, and diffusion-based denoising—under physically realizable adversarial perturbations on a Level-2 production vehicle.
Contribution/Results: The study quantitatively characterizes the effectiveness boundaries and limitations of each defense in realistic driving scenarios, establishes a reproducible benchmark for physical-world adversarial robustness assessment, and delivers empirically grounded, actionable guidelines for designing robust perception modules in ADS. Findings reveal significant performance gaps between lab-simulated and real-world adversarial settings, underscoring the necessity of physics-aware evaluation and defense co-design.
📝 Abstract
Autonomous driving systems (ADS) increasingly rely on deep learning-based perception models, which remain vulnerable to adversarial attacks. In this paper, we revisit adversarial attacks and defense methods, focusing on road sign recognition and lead-object detection and relative-distance prediction. Using a Level-2 production ADS, OpenPilot by Comma.ai, and the widely adopted YOLO model, we systematically examine the impact of adversarial perturbations and assess defense techniques, including adversarial training, image preprocessing, contrastive learning, and diffusion models. Our experiments highlight both the strengths and limitations of these methods in mitigating complex attacks. Through targeted evaluations of model robustness, we aim to provide deeper insights into the vulnerabilities of ADS perception systems and to offer guidance for developing more resilient defense strategies.
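The attacks studied in the paper are physically realizable perturbations against production perception models. As a minimal digital illustration of the gradient-sign (FGSM-style) perturbation that underlies many such attacks, the following sketch attacks a toy logistic-regression classifier; all weights and values are invented for illustration and are not from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier (illustrative weights, not from the paper).
w = np.array([2.0, -1.0])
x = np.array([1.0, 0.5])   # clean input, true label y = 1
y = 1.0

# Clean prediction: logit = w.x = 1.5 -> class 1 (correct).
z = w @ x
pred_clean = int(sigmoid(z) > 0.5)

# FGSM: step the input along the sign of the loss gradient.
# For logistic loss, d(loss)/dx = (sigmoid(z) - y) * w.
grad_x = (sigmoid(z) - y) * w
eps = 0.6                          # perturbation budget
x_adv = x + eps * np.sign(grad_x)  # x_adv = [0.4, 1.1]

# Adversarial prediction: logit falls to 1.5 - 3*eps = -0.3 -> class 0.
z_adv = w @ x_adv
pred_adv = int(sigmoid(z_adv) > 0.5)
```

Adversarial training, one of the defenses evaluated, works by generating such perturbed inputs (`x_adv`) during training and including them in the loss, so the model learns to classify them correctly.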