🤖 AI Summary
Deep neural networks for vision tasks are vulnerable to adversarial attacks. Existing adversarial training methods rely on inverse adversarial attacks to generate high-confidence samples but overlook an implicit background feature bias—namely, models over-rely on spurious background cues that lack causal relevance to the target label, thereby degrading both robustness and generalization. This work is the first to identify and formalize this bias mechanism. We propose a debiased high-confidence adversarial training framework: (1) logit alignment constraints to enhance class discriminability, and (2) foreground-logit orthogonalization to explicitly decouple background interference. Our method is plug-and-play and requires no architectural modifications. Evaluated on CIFAR-10/100 and ImageNet, it achieves state-of-the-art robust accuracy under standard adversarial benchmarks while significantly improving cross-dataset generalization.
📝 Abstract
Despite the significant advances that deep neural networks (DNNs) have achieved in various visual tasks, they still exhibit vulnerability to adversarial examples, leading to serious security concerns. Recent adversarial training techniques have utilized inverse adversarial attacks to generate high-confidence examples, aiming to align the distributions of adversarial examples with the high-confidence regions of their corresponding classes. However, in this paper, our investigation reveals that high-confidence outputs under inverse adversarial attacks are correlated with biased feature activation. Specifically, training with inverse adversarial examples causes the model's attention to shift towards background features, introducing a spurious correlation bias. To address this bias, we propose Debiased High-Confidence Adversarial Training (DHAT), a novel approach that not only aligns the logits of adversarial examples with debiased high-confidence logits obtained from inverse adversarial examples, but also restores the model's attention to its normal state by enhancing foreground logit orthogonality. Extensive experiments demonstrate that DHAT achieves state-of-the-art performance and exhibits robust generalization capabilities across various vision datasets. Additionally, DHAT can seamlessly integrate with existing advanced adversarial training techniques to further improve performance.
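To make the two components of DHAT concrete, the sketch below illustrates one plausible form of the training objective described above: a logit-alignment term (here a KL divergence between the adversarial example's predictions and the high-confidence predictions from the inverse adversarial example) plus an orthogonality penalty between foreground and background logit vectors. This is a minimal illustration under our own assumptions, not the paper's exact implementation; the function names, the choice of KL for alignment, squared cosine similarity for orthogonality, and the weighting `lam` are all hypothetical.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def alignment_loss(adv_logits, inv_logits):
    """KL(p_inv || p_adv): pull adversarial predictions toward the
    (debiased) high-confidence predictions from inverse adversarial
    examples. A hypothetical stand-in for DHAT's alignment term."""
    p = softmax(inv_logits)  # target distribution
    q = softmax(adv_logits)  # adversarial distribution
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())

def orthogonality_loss(fg_logits, bg_logits):
    """Squared cosine similarity between foreground and background
    logit vectors; driving this to zero pushes the two toward
    orthogonality, suppressing reliance on background features."""
    cos = np.sum(fg_logits * bg_logits, axis=-1) / (
        np.linalg.norm(fg_logits, axis=-1)
        * np.linalg.norm(bg_logits, axis=-1) + 1e-12)
    return float(np.mean(cos ** 2))

def dhat_style_loss(adv_logits, inv_logits, fg_logits, bg_logits, lam=1.0):
    """Combined objective: alignment plus weighted orthogonality penalty."""
    return alignment_loss(adv_logits, inv_logits) + lam * orthogonality_loss(
        fg_logits, bg_logits)
```

As a sanity check, the alignment term vanishes when the two logit sets already agree, and the orthogonality penalty vanishes when the foreground and background logit vectors are perpendicular, which matches the intuition that a debiased model's foreground evidence should carry no background component.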