A Survey and Evaluation of Adversarial Attacks for Object Detection

📅 2024-08-04
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Object detection models exhibit significant vulnerability to adversarial attacks, yet robustness evaluation lacks standardization and comprehensive theoretical frameworks. Method: We propose the first adversarial attack taxonomy specifically for object detection, systematically unifying and extending robustness evaluation metrics. Through empirical analysis of 12 representative attacks—including FGSM, PGD, and CIoU-loss—across diverse detectors (e.g., Faster R-CNN, YOLOv5, and vision-language pretrained models like Grounding DINO), we characterize cross-architecture attack failure patterns and critical vulnerability modes. Contribution/Results: We identify severe inconsistencies in current evaluation practices and formally establish the necessity of a unified benchmarking protocol. Key research gaps are pinpointed, including multi-scale perturbation modeling and joint optimization of bounding-box confidence scores and IoU. Our framework provides both theoretical foundations and practical guidelines for developing robust detection systems.
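Among the attacks evaluated, FGSM is the simplest: it perturbs the input one step along the sign of the loss gradient. A minimal sketch of the idea on a toy differentiable model (illustrative only; the model, weights, and epsilon here are hypothetical, not the paper's detector setup):

```python
import numpy as np

def logistic_loss(w, x, y):
    # y in {-1, +1}; loss = log(1 + exp(-y * w.x))
    return np.log1p(np.exp(-y * w.dot(x)))

def fgsm(w, x, y, eps):
    # FGSM: x_adv = x + eps * sign(grad_x loss).
    # Analytic gradient of the logistic loss w.r.t. the input x.
    margin = -y * w.dot(x)
    grad_x = -y * w * (1.0 / (1.0 + np.exp(-margin)))
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # hypothetical model weights
x = rng.normal(size=8)   # a clean input
y = 1                    # its true label
x_adv = fgsm(w, x, y, eps=0.1)

# The single signed-gradient step strictly increases the loss here.
print(logistic_loss(w, x_adv, y) > logistic_loss(w, x, y))  # True
```

Detection attacks such as PGD variants iterate this step with projection, and detector-specific attacks swap in losses over box regression (e.g. CIoU) and objectness scores rather than classification alone.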

📝 Abstract
Deep learning models achieve remarkable accuracy in computer vision tasks, yet remain vulnerable to adversarial examples: carefully crafted perturbations to input images that can deceive these models into making confident but incorrect predictions. This vulnerability poses significant risks in high-stakes applications such as autonomous vehicles, security surveillance, and safety-critical inspection systems. While the existing literature extensively covers adversarial attacks in image classification, comprehensive analyses of such attacks on object detection systems remain limited. This paper presents a novel taxonomic framework for categorizing adversarial attacks specific to object detection architectures, synthesizes existing robustness metrics, and provides a comprehensive empirical evaluation of state-of-the-art attack methodologies on popular object detection models, including both traditional detectors and modern detectors with vision-language pretraining. Through rigorous analysis of open-source attack implementations and their effectiveness across diverse detection architectures, we derive key insights into attack characteristics. Furthermore, we delineate critical research gaps and emerging challenges to guide future investigations in securing object detection systems against adversarial threats. Our findings establish a foundation for developing more robust detection models while highlighting the urgent need for standardized evaluation protocols in this rapidly evolving domain.
Problem

Research questions and friction points this paper is trying to address.

Analyzes adversarial attacks on object detection systems
Evaluates attack effectiveness across diverse detection architectures
Identifies research gaps for robust object detection models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel taxonomic framework for adversarial attacks
Comprehensive empirical evaluation of attack methodologies
Standardized evaluation protocols for robustness
Khoi Nguyen Tiet Nguyen
Institute for Infocomm Research, A*STAR, Singapore
Wenyu Zhang
Institute for Infocomm Research, A*STAR, Singapore
Kangkang Lu
Institute for Infocomm Research, A*STAR, Singapore
Yuhuan Wu
Institute of High Performance Computing, A*STAR, Singapore
Xingjian Zheng
Institute of High Performance Computing, A*STAR, Singapore
Hui Li Tan
Institute for Infocomm Research, A*STAR, Singapore
Liangli Zhen
A*STAR, Singapore
Machine Learning · AI Safety · Multi-Objective Optimisation