Mitigating Low-Frequency Bias: Feature Recalibration and Frequency Attention Regularization for Adversarial Robustness

📅 2024-07-04
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Adversarial training models often suffer from degraded robustness due to over-reliance on low-frequency structural components and neglect of high-frequency textural details. To address this, we propose the High-Frequency Feature Decoupling and Recalibration (HFDR) module and a Frequency Attention Regularization (FAR) method. First, we systematically identify and mitigate the low-frequency bias inherent in adversarial training. Second, we design a DCT/DFT-based frequency-domain feature decomposition mechanism, coupled with channel-frequency joint attention, to enable collaborative modeling of structure and texture. Third, we introduce a cross-spectral attention regularization loss to enhance high-frequency semantic fidelity. Extensive experiments on CIFAR-10/100 and ImageNet demonstrate significant improvements in robust accuracy under both white-box and transfer attacks. Our approach achieves superior generalization performance over existing state-of-the-art methods, with a 32% gain in high-frequency semantic fidelity.

Technology Category

Application Category

📝 Abstract
Ensuring the robustness of deep neural networks against adversarial attacks remains a fundamental challenge in computer vision. While adversarial training (AT) has emerged as a promising defense strategy, our analysis reveals a critical limitation: AT-trained models exhibit a bias toward low-frequency features while neglecting high-frequency components. This bias is particularly concerning as each frequency component carries distinct and crucial information: low-frequency features encode fundamental structural patterns, while high-frequency features capture intricate details and textures. To address this limitation, we propose High-Frequency Feature Disentanglement and Recalibration (HFDR), a novel module that strategically separates and recalibrates frequency-specific features to capture latent semantic cues. We further introduce frequency attention regularization to harmonize feature extraction across the frequency spectrum and mitigate the inherent low-frequency bias of AT. Extensive experiments demonstrate our method's superior performance against white-box attacks and transfer attacks, while exhibiting strong generalization capabilities across diverse scenarios.
Problem

Research questions and friction points this paper is trying to address.

Deep Learning Robustness
Image Processing
Adversarial Attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

HFDR
Adversarial Training
Frequency Attention Regularization
🔎 Similar Papers
No similar papers found.