AI Summary
This work proposes a deterministic adversarial-example purification framework to address the vulnerability of stochastic purification methods under strong white-box attacks, where inherent randomness can be exploited to degrade robustness. The approach searches within the input's local neighborhood for a sample that minimizes the expected reconstruction error under small-noise corruption and, for the first time, integrates Sharpness-Aware Minimization (SAM) into the purification process to steer solutions toward flatter regions of the error landscape. Theoretical analysis shows that, in the small-noise limit, the method recovers local maxima of the Gaussian-smoothed data density. Empirical evaluations show that the proposed framework significantly outperforms current state-of-the-art defenses against strong deterministic white-box attacks while maintaining high accuracy on clean samples.
Abstract
We propose a novel deterministic purification method to improve adversarial robustness by mapping a potentially adversarial sample toward a nearby sample that lies close to a mode of the data distribution, where classifiers are more reliable. We design the method to be deterministic to ensure reliable test accuracy and to prevent the degradation of effective robustness observed in stochastic purification approaches when the adversary has full knowledge of the system and its randomness. We employ a score model trained by minimizing the expected reconstruction error of noise-corrupted data, thereby learning the structural characteristics of the input data distribution. Given a potentially adversarial input, the method searches within its local neighborhood for a purified sample that minimizes the expected reconstruction error under noise corruption and then feeds this purified sample to the classifier. During purification, sharpness-aware minimization is used to guide the purified samples toward flat regions of the expected reconstruction error landscape, thereby enhancing robustness. We further show that, as the noise level decreases, minimizing the expected reconstruction error biases the purified sample toward local maximizers of the Gaussian-smoothed density; under additional local assumptions on the score model, we prove recovery of a local maximizer in the small-noise limit. Experimental results demonstrate significant gains in adversarial robustness over state-of-the-art methods under strong deterministic white-box attacks.
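The purification loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the reconstruction model `recon`, the Monte Carlo sample size, the finite-difference gradients, and all hyperparameter values (`lr`, `rho`, `sigma`, `steps`) are assumptions chosen for clarity. The expected reconstruction error is estimated with a fixed noise sample (common random numbers), which keeps the whole procedure deterministic, matching the paper's stated design goal; the SAM step first ascends to a nearby worst-case point and then descends using the gradient evaluated there.

```python
import numpy as np

RNG_SEED = 0  # fixed seed: the objective and hence the purification are deterministic


def expected_recon_error(x, recon, sigma=0.05, n_mc=64):
    """Monte Carlo estimate of E_eps || recon(x + sigma*eps) - x ||^2.

    Reusing the same noise draws on every call (common random numbers)
    makes the objective a deterministic function of x.
    """
    eps = np.random.default_rng(RNG_SEED).standard_normal((n_mc, x.size))
    noisy = x + sigma * eps                 # noise-corrupted copies of x
    diff = recon(noisy) - x                 # reconstruction residuals
    return float(np.mean(np.sum(diff ** 2, axis=1)))


def num_grad(f, x, h=1e-4):
    """Central-difference gradient (a stand-in for autodiff in this toy sketch)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g


def purify_sam(x_adv, recon, steps=50, lr=0.1, rho=0.05, sigma=0.05):
    """Deterministic purification with a SAM-style inner ascent step."""
    x = np.asarray(x_adv, dtype=float).copy()
    loss = lambda z: expected_recon_error(z, recon, sigma)
    for _ in range(steps):
        g = num_grad(loss, x)
        x_pert = x + rho * g / (np.linalg.norm(g) + 1e-12)  # SAM: ascend to a sharp neighbor
        x = x - lr * num_grad(loss, x_pert)                 # descend using the worst-case gradient
    return x
```

As a toy stand-in for a trained score/reconstruction model, a denoiser that contracts toward a known mode `m`, e.g. `recon = lambda y: m + 0.5 * (y - m)`, makes the expected reconstruction error minimal at `m`, so `purify_sam` pulls a perturbed input back toward that mode, and repeated calls on the same input return the identical purified point.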