🤖 AI Summary
This study systematically evaluates the effectiveness of mainstream adversarial defenses against large-area, natural-looking adversarial clothing attacks in the physical world. To attack object detectors, we propose a cross-domain optimization method for generating adversarial clothing that jointly leverages digital-domain gradient-based optimization and physical-domain wearability constraints, enabling highly transferable attacks against multiple defense models with a single garment pattern. Experiments demonstrate that the generated clothing achieves a 96.06% attack success rate against an undefended detector and attack success rates of over 64.84% against nine representative defended models in real-world settings. This work is the first to reveal systemic vulnerabilities of current defenses to large-coverage, natural-looking wearable adversarial perturbations, and it provides critical empirical evidence and concrete directions for improving the robustness of vision systems in practical deployment scenarios.
📝 Abstract
In recent years, adversarial attacks against deep learning-based object detectors in the physical world have attracted much attention. To defend against these attacks, researchers have proposed various defense methods against adversarial patches, a typical form of physically realizable attack. However, our experiments showed that simply enlarging the patch size could make these defense methods fail. Motivated by this, we evaluated various defense methods against adversarial clothes, which cover a large area of the human body. Adversarial clothes provide a good test case for adversarial defenses against patch-based attacks because they not only have large sizes but also look more natural than a large patch on a human. Experiments show that all the defense methods performed poorly against adversarial clothes in both the digital world and the physical world. In addition, we crafted a single set of clothes that broke multiple defense methods on Faster R-CNN. The set achieved an Attack Success Rate (ASR) of 96.06% against the undefended detector and ASRs of over 64.84% against nine defended models in the physical world, unveiling a common vulnerability of existing adversarial defense methods to adversarial clothes. Code is available at: https://github.com/weiz0823/adv-clothes-break-multiple-defenses.
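To make the reported metric concrete, the sketch below shows one common way an Attack Success Rate (ASR) is computed for a person-hiding attack: a frame counts as a success when the detector no longer reports a person above a confidence threshold. This is a minimal illustration under assumed conventions, not the evaluation code from the repository; the function name, threshold, and input format are all hypothetical.

```python
def attack_success_rate(person_confidences, threshold=0.5):
    """Fraction of frames in which the attack hides the person.

    person_confidences: per-frame maximum 'person' confidence reported
    by the detector while the subject wears the adversarial clothes
    (an illustrative input format, not the paper's actual pipeline).
    """
    if not person_confidences:
        return 0.0
    # A frame is a successful attack if no person is detected above threshold.
    hidden = sum(1 for c in person_confidences if c < threshold)
    return hidden / len(person_confidences)


# Example: the person is suppressed below the 0.5 threshold in 3 of 4 frames.
print(attack_success_rate([0.1, 0.3, 0.7, 0.2]))  # -> 0.75
```

Under this convention, the paper's 96.06% ASR would mean the detector failed to find the person in 96.06% of evaluated frames.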