PBCAT: Patch-based composite adversarial training against physically realizable attacks on object detection

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Physical adversarial attacks—such as adversarial patches and textures—pose significant threats to object detectors, yet existing robust training methods lack a unified framework capable of defending against diverse, real-world physical attack modalities. Method: This paper proposes a patch-based composite adversarial training framework that jointly optimizes gradient-guided local adversarial patches and imperceptible global perturbations, integrated with multi-scale data augmentation and end-to-end joint training. Contribution/Results: To the best of the authors' knowledge, this is the first approach enabling unified modeling of and defense against multiple physical attack types. It exhibits strong generalization, effectively mitigating both seen and unseen attacks. Under a state-of-the-art adversarial texture attack, it improves detection accuracy by 29.7% over prior defense methods, substantially enhancing detector security and robustness in practical deployment scenarios.

📝 Abstract
Object detection plays a crucial role in many security-sensitive applications. However, several recent studies have shown that object detectors can be easily fooled by physically realizable attacks, e.g., adversarial patches and recent adversarial textures, which pose realistic and urgent threats. Adversarial Training (AT) has been recognized as the most effective defense against adversarial attacks. While AT has been extensively studied in the $\ell_\infty$ attack settings on classification models, AT against physically realizable attacks on object detectors has received limited exploration. Early attempts defended only against adversarial patches, leaving AT against a wider range of physically realizable attacks under-explored. In this work, we consider defending against various physically realizable attacks with a unified AT method. We propose PBCAT, a novel Patch-Based Composite Adversarial Training strategy. PBCAT optimizes the model by incorporating the combination of small-area gradient-guided adversarial patches and imperceptible global adversarial perturbations covering the entire image. With these designs, PBCAT has the potential to defend against not only adversarial patches but also unseen physically realizable attacks such as adversarial textures. Extensive experiments in multiple settings demonstrated that PBCAT significantly improved robustness against various physically realizable attacks over state-of-the-art defense methods. Notably, it improved the detection accuracy by 29.7% over previous defense methods under one recent adversarial texture attack.
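The core idea described in the abstract—combining a small-budget global perturbation over the whole image with a large-budget perturbation confined to a patch region—can be sketched as a single gradient-sign step. This is a minimal illustrative sketch in numpy under assumed budgets; the function name, the toy FGSM-style step, and the parameter values are assumptions for illustration, not the paper's actual implementation (PBCAT uses gradient-guided patch placement and iterative optimization inside a detector training loop).

```python
import numpy as np

def composite_adversarial_example(x, grad, patch_box,
                                  eps_global=2/255, eps_patch=1.0):
    """Hypothetical single-step composite perturbation in the spirit of PBCAT:
    an imperceptible global perturbation over the entire image, plus a
    large-budget perturbation restricted to a small patch region.
    Budgets and the FGSM-style update are illustrative assumptions."""
    # Global step: sign of the loss gradient, bounded by a small l_inf budget.
    delta_global = eps_global * np.sign(grad)

    # Binary mask: 1 inside the patch box (applied across all channels).
    y0, x0, y1, x1 = patch_box
    mask = np.zeros_like(x)
    mask[..., y0:y1, x0:x1] = 1.0

    # Patch step: same direction, but with a much larger budget.
    delta_patch = eps_patch * np.sign(grad)

    # Composite image: the patch perturbation overrides the global one
    # inside the box; pixel values stay in the valid [0, 1] range.
    x_adv = x + (1 - mask) * delta_global + mask * delta_patch
    return np.clip(x_adv, 0.0, 1.0), mask

# Toy usage with a constant gradient standing in for a detector's loss gradient.
x = np.full((3, 8, 8), 0.5)          # dummy 3-channel image
grad = np.ones_like(x)               # placeholder gradient
x_adv, mask = composite_adversarial_example(x, grad, patch_box=(2, 2, 5, 5))
```

In actual adversarial training, `x_adv` (rather than the clean image) would be fed to the detector and the model parameters updated on the resulting loss; the patch location would be chosen from the gradient rather than fixed.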
Problem

Research questions and friction points this paper is trying to address.

Defends object detectors against physically realizable attacks
Unifies adversarial training for patches and global perturbations
Improves robustness against unseen adversarial textures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified adversarial training for diverse physical attacks
Combines gradient-guided patches and global perturbations
Enhances robustness against unseen adversarial textures
Xiao Li
Department of Computer Science and Technology, BNRist, Institute for Artificial Intelligence, Tsinghua Laboratory of Brain and Intelligence, Tsinghua University
Yiming Zhu
PhD student in AI
Social Computing · Internet Measurements · Data Science
Yifan Huang
Department of Computer Science and Technology, BNRist, Institute for Artificial Intelligence, Tsinghua Laboratory of Brain and Intelligence, Tsinghua University
Wei Zhang
Department of Computer Science and Technology, BNRist, Institute for Artificial Intelligence, Tsinghua Laboratory of Brain and Intelligence, Tsinghua University
Yingzhe He
Huawei Technologies
Jie Shi
Huawei Technologies
Xiaolin Hu
Department of Computer Science and Technology, BNRist, Institute for Artificial Intelligence, Tsinghua Laboratory of Brain and Intelligence, Tsinghua University; Chinese Institute for Brain Research (CIBR)