Towards Adversarial Robustness via Debiased High-Confidence Logit Alignment

📅 2024-08-12
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep neural networks for vision tasks are vulnerable to adversarial attacks. Existing adversarial training methods rely on inverse adversarial attacks to generate high-confidence samples but overlook an implicit background feature bias: models over-rely on spurious background cues that lack causal relevance to the target label, degrading both robustness and generalization. This work is the first to identify and formalize this bias mechanism. The authors propose Debiased High-Confidence Adversarial Training (DHAT), which combines (1) logit alignment constraints to enhance class discriminability and (2) foreground-logit orthogonalization to explicitly decouple background interference. The method is plug-and-play and requires no architectural modifications. Evaluated on CIFAR-10/100 and ImageNet, it achieves state-of-the-art robust accuracy under standard adversarial benchmarks while significantly improving cross-dataset generalization.

📝 Abstract
Despite the significant advances that deep neural networks (DNNs) have achieved in various visual tasks, they still exhibit vulnerability to adversarial examples, leading to serious security concerns. Recent adversarial training techniques have utilized inverse adversarial attacks to generate high-confidence examples, aiming to align the distributions of adversarial examples with the high-confidence regions of their corresponding classes. However, in this paper, our investigation reveals that high-confidence outputs under inverse adversarial attacks are correlated with biased feature activation. Specifically, training with inverse adversarial examples shifts the model's attention towards background features, introducing a spurious correlation bias. To address this bias, we propose Debiased High-Confidence Adversarial Training (DHAT), a novel approach that not only aligns the logits of adversarial examples with debiased high-confidence logits obtained from inverse adversarial examples, but also restores the model's attention to its normal state by enhancing foreground logit orthogonality. Extensive experiments demonstrate that DHAT achieves state-of-the-art performance and exhibits robust generalization capabilities across various vision datasets. Additionally, DHAT can seamlessly integrate with existing advanced adversarial training techniques to further improve performance.
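The abstract describes two loss components: aligning adversarial-example logits with debiased high-confidence logits, and a regularizer that pushes foreground and background logits toward orthogonality. Below is a minimal numpy sketch of how such a combined objective could look. The function names, the KL-divergence choice for the alignment term, the cosine-based orthogonality penalty, and the weight `lam` are all illustrative assumptions on my part, not the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def logit_alignment_loss(adv_logits, debiased_logits):
    # KL(debiased || adversarial): pulls adversarial predictions toward
    # the debiased high-confidence targets (an assumed divergence choice)
    p = softmax(debiased_logits)
    q = softmax(adv_logits)
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)))

def foreground_orthogonality_penalty(fg_logits, bg_logits):
    # squared cosine similarity between foreground and background logit
    # vectors; driving it to zero encourages the two to be orthogonal
    cos = np.sum(fg_logits * bg_logits, axis=-1) / (
        np.linalg.norm(fg_logits, axis=-1) * np.linalg.norm(bg_logits, axis=-1) + 1e-12)
    return float(np.mean(cos ** 2))

def dhat_style_loss(adv_logits, debiased_logits, fg_logits, bg_logits, lam=1.0):
    # hypothetical combined objective: alignment term + weighted orthogonality term
    return (logit_alignment_loss(adv_logits, debiased_logits)
            + lam * foreground_orthogonality_penalty(fg_logits, bg_logits))
```

As a sanity check on the sketch, identical adversarial and debiased logits give zero alignment loss, and orthogonal foreground/background logit vectors incur no penalty; the paper's actual formulation should be taken from the original text.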
Problem

Research questions and friction points this paper is trying to address.

Addresses vulnerability of DNNs to adversarial examples
Mitigates biased feature activations in adversarial training
Improves model robustness and generalization via debiased logit alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns adversarial logits with debiased high-confidence logits
Enhances foreground logit orthogonality to restore attention
Mitigates feature bias in inverse adversarial training