🤖 AI Summary
This work addresses a critical flaw in current adversarial robustness evaluation methods, which substantially overestimate model robustness when defenses employ "dummy classes" as security traps. The study is the first to expose the vulnerability of such defenses under conventional evaluation protocols and introduces a novel weighted adversarial attack framework. Building upon AutoAttack, the proposed method incorporates a dual-objective loss function that simultaneously targets both genuine and dummy classes, complemented by a dynamic weighting mechanism that adaptively optimizes attack efficacy. Experimental results on CIFAR-10 under ℓ∞ perturbations with ε = 8/255 demonstrate that the approach drastically reduces the reported robust accuracy of a prominent dummy-class defense from 58.61% to 29.52%, thereby revealing its true susceptibility and advancing the fidelity of robustness evaluation methodologies.
📄 Abstract
Adversarial robustness evaluation faces a critical challenge as new defense paradigms emerge that can exploit limitations in existing assessment methods. This paper reveals that Dummy Classes-based defenses, which introduce an additional "dummy" class as a safety sink for adversarial examples, achieve significantly overestimated robustness under conventional evaluation strategies such as AutoAttack. The fundamental limitation stems from these attacks' singular focus on misleading the true class label, which plays directly into the defense mechanism: successful attacks are simply captured by the dummy class. To address this gap, we propose Dummy-Aware Weighted Attack (DAWA), a novel evaluation method that simultaneously targets both the true label and the dummy label with adaptive weighting during adversarial example synthesis. Extensive experiments demonstrate that DAWA effectively breaks this defense paradigm, reducing the measured robustness of a leading Dummy Classes-based defense from 58.61% to 29.52% on CIFAR-10 under ℓ∞ perturbation (ε = 8/255). Our work provides a more reliable benchmark for evaluating this emerging class of defenses and highlights the need for continuous evolution of robustness assessment methodologies.
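To make the dual-objective idea concrete, the sketch below shows one plausible form of a dummy-aware attack loss: a weighted sum of the cross-entropy with respect to the true label and the cross-entropy with respect to the dummy label, so that maximizing it pushes probability mass away from *both* classes at once. This is an illustrative reconstruction, not the paper's exact formulation; the function name `dummy_aware_loss`, the weight `alpha`, and the fixed weighting scheme are assumptions (the actual DAWA method adapts its weights dynamically during synthesis).

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    shifted = logits - np.max(logits)
    exp = np.exp(shifted)
    return exp / np.sum(exp)

def dummy_aware_loss(logits, true_idx, dummy_idx, alpha=0.5):
    """Hypothetical dual-objective attack loss (a sketch, not the paper's exact loss).

    Combines cross-entropy w.r.t. the true class and the dummy class.
    An attacker *maximizing* this quantity drives the model's prediction
    away from both the true label and the dummy "safety sink" label.
    alpha is a fixed weight here; DAWA adapts it dynamically.
    """
    probs = softmax(np.asarray(logits, dtype=float))
    ce_true = -np.log(probs[true_idx])    # high when true class is unlikely
    ce_dummy = -np.log(probs[dummy_idx])  # high when dummy class is unlikely
    return alpha * ce_true + (1.0 - alpha) * ce_dummy

# A prediction landing on an unrelated class (index 2) scores higher than
# one landing on the true class (index 0), since the latter zeroes out
# only the true-class term while the dummy term alone caps the loss.
loss_on_true = dummy_aware_loss([5.0, 0.0, 0.0, 0.0], true_idx=0, dummy_idx=3)
loss_on_other = dummy_aware_loss([0.0, 0.0, 5.0, 0.0], true_idx=0, dummy_idx=3)
```

A plain AutoAttack-style loss would contain only the `ce_true` term, so perturbations that route the input into the dummy class still count as "successes" for the attacker while being caught by the defense; the added `ce_dummy` term is what removes that escape hatch.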