Towards Powerful and Practical Patch Attacks for 2D Object Detection in Autonomous Driving

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the overestimation of adversarial-patch vulnerability in learning-based 2D object detectors for autonomous driving under high-resolution scenarios, a flaw stemming from existing attack methods' reliance on low-resolution training data and lenient evaluation metrics (e.g., mAP). To this end, the authors propose P³A, an efficient transfer-based black-box adversarial patch attack framework tailored for high-resolution inputs. The contributions are threefold: (1) the Practical Attack Success Rate (PASR), a deployment-relevant metric that reflects real-world detection failure; (2) a tailored Localization-Confidence Suppression Loss (LCSL) that precisely degrades the confidence scores of targeted detections; and (3) Probabilistic Scale-Preserving Padding (PSPP), a preprocessing step that preserves patch effectiveness and cross-model transferability on high-resolution imagery. Extensive experiments show that P³A significantly outperforms state-of-the-art methods on unseen models and high-resolution benchmarks, validating its effectiveness and robust transferability in realistic autonomous driving settings.

📝 Abstract
Learning-based autonomous driving systems remain critically vulnerable to adversarial patches, posing serious safety and security risks in their real-world deployment. Black-box attacks, notable for their high attack success rate without model knowledge, are especially concerning, with their transferability extensively studied to reduce computational costs compared to query-based attacks. Previous transferability-based black-box attacks typically adopt mean Average Precision (mAP) as the evaluation metric and design training loss accordingly. However, due to the presence of multiple detected bounding boxes and the relatively lenient Intersection over Union (IoU) thresholds, the attack effectiveness of these approaches is often overestimated, resulting in reduced success rates in practical attacking scenarios. Furthermore, patches trained on low-resolution data often fail to maintain effectiveness on high-resolution images, limiting their transferability to autonomous driving datasets. To fill this gap, we propose P³A, a Powerful and Practical Patch Attack framework for 2D object detection in autonomous driving, specifically optimized for high-resolution datasets. First, we introduce a novel metric, Practical Attack Success Rate (PASR), to more accurately quantify attack effectiveness with greater relevance for pedestrian safety. Second, we present a tailored Localization-Confidence Suppression Loss (LCSL) to improve attack transferability under PASR. Finally, to maintain the transferability for high-resolution datasets, we further incorporate the Probabilistic Scale-Preserving Padding (PSPP) into the patch attack pipeline as a data preprocessing step. Extensive experiments show that P³A outperforms state-of-the-art attacks on unseen models and unseen high-resolution datasets, both under the proposed practical IoU-based evaluation metric and the previous mAP-based metrics.
Problem

Research questions and friction points this paper is trying to address.

Address vulnerability of autonomous driving systems to adversarial patches
Improve attack transferability for high-resolution autonomous driving datasets
Propose accurate evaluation metric for practical attack success rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes PASR metric for accurate attack evaluation
Introduces LCSL loss to enhance attack transferability
Uses PSPP preprocessing for high-resolution adaptability
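The paper does not spell out PASR's exact formula here, but the abstract describes it as a practical IoU-based success metric. A minimal sketch, under the assumption that an object counts as successfully attacked only when no post-attack detection both overlaps its ground-truth box (IoU above a threshold) and survives with sufficient confidence (function names and thresholds below are illustrative, not from the paper):

```python
def iou(a, b):
    """Intersection-over-Union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def pasr(gt_boxes, detections, iou_thresh=0.5, conf_thresh=0.5):
    """Fraction of ground-truth objects with no surviving detection.

    `detections` is a list of (box, confidence) pairs produced on the
    patched image; an object counts as successfully attacked when no
    detection both overlaps it (IoU >= iou_thresh) and is confident
    enough (confidence >= conf_thresh).
    """
    if not gt_boxes:
        return 0.0
    missed = 0
    for gt in gt_boxes:
        detected = any(
            conf >= conf_thresh and iou(gt, box) >= iou_thresh
            for box, conf in detections
        )
        missed += not detected
    return missed / len(gt_boxes)
```

Unlike mAP, which averages over many boxes and IoU thresholds, a per-object criterion like this makes a single surviving high-confidence detection count as an attack failure, which is the stricter, deployment-relevant behavior the abstract argues for.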