A Single Set of Adversarial Clothes Breaks Multiple Defense Methods in the Physical World

📅 2025-10-20
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This study systematically evaluates mainstream adversarial defenses against large-coverage, natural-looking adversarial clothing attacks on object detectors in the physical world. The authors generate adversarial clothes by combining gradient-based optimization in the digital domain with physical-domain constraints that keep the pattern printable and wearable, yielding a single garment pattern that transfers across multiple defended models. Experiments show that the clothes achieve a 96.06% attack success rate against the undefended Faster R-CNN detector and over 64.84% against each of nine representative defended models in real-world tests. The work reveals a common vulnerability of current patch defenses to large-coverage, natural-looking wearable perturbations and provides empirical evidence and concrete directions for improving the robustness of vision systems in practical deployment.
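The summary above centers on gradient-based optimization of a clothing texture against an object detector. As a rough illustration only (not the authors' pipeline, which additionally handles cloth deformation, printing, and physical constraints), below is a minimal PyTorch sketch of suppressing person detections from a pretrained Faster R-CNN by optimizing a pasted texture; the data `loader`, the texture size, and the paste location are hypothetical placeholders:

```python
# Hypothetical sketch (not the authors' code): optimize a texture so that a
# pretrained Faster R-CNN stops detecting the person it is pasted onto.
import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval().to(device)

# Adversarial texture to be printed on fabric; 300x300 is an arbitrary choice.
texture = torch.rand(3, 300, 300, device=device, requires_grad=True)
optimizer = torch.optim.Adam([texture], lr=0.01)

PERSON = 1  # COCO class index for "person" in torchvision detection models


def paste(texture, image, top, left):
    """Overwrite a rectangular region of the image with the (clamped) texture."""
    out = image.clone()
    h, w = texture.shape[1:]
    out[:, top:top + h, left:left + w] = texture.clamp(0, 1)
    return out


# `loader` is assumed to yield (photo of a person, torso location) pairs.
for image, (top, left) in loader:
    image = image.to(device)
    adv = paste(texture, image, top, left)
    preds = detector([adv])[0]                 # dict with 'boxes', 'labels', 'scores'
    person_scores = preds["scores"][preds["labels"] == PERSON]
    if person_scores.numel() == 0:
        continue                               # already undetected on this image
    loss = person_scores.max()                 # suppress the strongest person detection
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        texture.clamp_(0, 1)                   # keep the texture printable
```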

📝 Abstract
In recent years, adversarial attacks against deep learning-based object detectors in the physical world have attracted much attention. To defend against these attacks, researchers have proposed various defense methods against adversarial patches, a typical form of physically-realizable attack. However, our experiments showed that simply enlarging the patch size could make these defense methods fail. Motivated by this, we evaluated various defense methods against adversarial clothes which have large coverage over the human body. Adversarial clothes provide a good test case for adversarial defense against patch-based attacks because they not only have large sizes but also look more natural than a large patch on humans. Experiments show that all the defense methods had poor performance against adversarial clothes in both the digital world and the physical world. In addition, we crafted a single set of clothes that broke multiple defense methods on Faster R-CNN. The set achieved an Attack Success Rate (ASR) of 96.06% against the undefended detector and over 64.84% ASRs against nine defended models in the physical world, unveiling the common vulnerability of existing adversarial defense methods against adversarial clothes. Code is available at: https://github.com/weiz0823/adv-clothes-break-multiple-defenses.
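The abstract reports results in terms of Attack Success Rate (ASR) but the excerpt does not spell out the evaluation protocol. A common convention, shown in the illustrative sketch below with assumed score and IoU thresholds, is to count a frame as a successful attack when no sufficiently confident "person" detection overlaps the wearer:

```python
# Illustrative ASR computation under assumed thresholds (not the paper's exact protocol).
def iou(box_a, box_b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    ix0, iy0 = max(ax0, bx0), max(ay0, by0)
    ix1, iy1 = min(ax1, bx1), min(ay1, by1)
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0


def attack_success_rate(frames, score_thr=0.5, iou_thr=0.5):
    """frames: list of (detections, gt_box); detections are (box, label, score) tuples."""
    successes = 0
    for detections, gt_box in frames:
        detected = any(
            label == "person" and score >= score_thr and iou(box, gt_box) >= iou_thr
            for box, label, score in detections
        )
        successes += 0 if detected else 1   # success = the wearer is missed
    return successes / len(frames)
```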
Problem

Research questions and friction points this paper is trying to address.

Evaluating defense methods against adversarial clothing attacks on object detectors
Creating adversarial clothes that bypass multiple existing defense mechanisms
Revealing common vulnerabilities in adversarial patch defense systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial clothes break multiple defense methods
Large patch size causes existing defenses to fail
Single clothing set achieves high attack success rate
Wei Zhang
Department of Computer Science and Technology, Institute for Artificial Intelligence, THBI, BNRist, Tsinghua University, Beijing 100084, China
Zhanhao Hu
University of California, Berkeley
large language models · adversarial examples · privacy · security
Xiao Li
Department of Computer Science and Technology, Institute for Artificial Intelligence, THBI, BNRist, Tsinghua University, Beijing 100084, China
Xiaopei Zhu
Department of Computer Science and Technology, Institute for Artificial Intelligence, THBI, BNRist, Tsinghua University, Beijing 100084, China
Xiaolin Hu
Department of Computer Science and Technology, Institute for Artificial Intelligence, THBI, BNRist, Tsinghua University, Beijing 100084, China; Chinese Institute for Brain Research (CIBR), Beijing 100010, China