Feature Statistics with Uncertainty Help Adversarial Robustness

📅 2025-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep neural networks (DNNs) suffer severe robustness degradation under adversarial attacks because the attacks systematically shift the distributions of feature statistics. To address this, we propose Feature Statistics with Uncertainty (FSU), a general-purpose defense grounded in uncertainty-aware modeling of channel-wise statistics. FSU theoretically characterizes the adversarial perturbation mechanism, namely the directional shift in per-channel means and standard deviations, and introduces a plug-and-play module that models these statistics with multivariate Gaussian distributions and performs stochastic resampling for robust feature reconstruction. Fully compatible with training, inference, adversarial attack generation, and fine-tuning, FSU incurs negligible computational overhead. Evaluated on CIFAR-10, CIFAR-100, and SVHN, FSU-integrated models achieve 50%-80% robust accuracy against the strong Carlini & Wagner (CW) attack, substantially outperforming existing defenses that otherwise collapse.

📝 Abstract
Despite the remarkable success of deep neural networks (DNNs), the security threat of adversarial attacks poses a significant challenge to the reliability of DNNs. By introducing randomness into different parts of a DNN, stochastic methods enable the model to learn some uncertainty, thereby improving robustness efficiently. In this paper, we theoretically discover a universal phenomenon: adversarial attacks shift the distributions of feature statistics. Motivated by this theoretical finding, we propose a robustness enhancement module called Feature Statistics with Uncertainty (FSU). It resamples channel-wise feature means and standard deviations of examples from multivariate Gaussian distributions, which helps to reconstruct the attacked examples and calibrate the shifted distributions. The calibration recovers some domain characteristics of the data for classification, thereby mitigating the influence of perturbations and weakening the ability of attacks to deceive models. The proposed FSU module is universally applicable in training, attacking, predicting, and fine-tuning, demonstrating impressive robustness enhancement at trivial additional time cost. For example, against powerful optimization-based CW attacks, incorporating FSU into the attacking and predicting phases endows many collapsed state-of-the-art models with 50%-80% robust accuracy on CIFAR-10, CIFAR-100, and SVHN.
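The resampling step the abstract describes can be sketched in code. The following is a minimal illustration in PyTorch, not the authors' implementation: the class name `FSU` and the choice of batch-level variance of the statistics as the uncertainty scale are assumptions made here for concreteness.

```python
import torch
import torch.nn as nn


class FSU(nn.Module):
    """Sketch of a feature-statistics-with-uncertainty layer.

    Computes per-example channel-wise means and standard deviations,
    resamples them from Gaussian distributions whose spread is
    estimated across the mini-batch, and re-standardizes the features
    with the resampled statistics.
    """

    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        mu = x.mean(dim=(2, 3), keepdim=True)                      # channel means
        sig = (x.var(dim=(2, 3), keepdim=True) + self.eps).sqrt()  # channel stds

        # Batch-level spread of the statistics serves as the
        # uncertainty scale for resampling (an assumption of this sketch).
        mu_scale = (mu.var(dim=0, keepdim=True, unbiased=False) + self.eps).sqrt()
        sig_scale = (sig.var(dim=0, keepdim=True, unbiased=False) + self.eps).sqrt()

        # Resample the statistics: mu' ~ N(mu, mu_scale^2), sig' ~ N(sig, sig_scale^2).
        new_mu = mu + torch.randn_like(mu) * mu_scale
        new_sig = sig + torch.randn_like(sig) * sig_scale

        # Reconstruct the features under the resampled statistics.
        return (x - mu) / sig * new_sig + new_mu
```

Because the module only re-standardizes existing activations, it can in principle be dropped between layers of an existing network during training, attack generation, or prediction, which is what makes this family of defenses plug-and-play.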
Problem

Research questions and friction points this paper is trying to address.

Adversarial attacks threaten DNN reliability by shifting the distributions of feature statistics
How to reconstruct attacked examples and calibrate their shifted feature distributions
How to enhance robustness universally across phases at minimal additional computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces randomness into the network so the model learns uncertainty in its features
Proposes the plug-and-play Feature Statistics with Uncertainty (FSU) module
Resamples channel-wise feature means and standard deviations from multivariate Gaussian distributions
Authors
Ran Wang (School of Mathematical Sciences, Shenzhen University, Shenzhen, China)
Xinlei Zhou (School of Mathematical Sciences, Shenzhen University, Shenzhen, China)
Rihao Li (School of Mathematical Sciences, Shenzhen University, Shenzhen, China)
Meng Hu (FDA)
Wenhui Wu (Shenzhen University)
Yuheng Jia (School of Computer Science and Engineering, Southeast University, Nanjing, China)