PurSAMERE: Reliable Adversarial Purification via Sharpness-Aware Minimization of Expected Reconstruction Error

๐Ÿ“… 2026-02-06
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work proposes a deterministic adversarial example purification framework to address the vulnerability of stochastic purification methods under strong white-box attacks, where inherent randomness can be exploited to degrade robustness. The approach searches within the input neighborhood for a sample that minimizes the expected reconstruction error under minimal noise perturbation and, for the first time, integrates Sharpness-Aware Minimization (SAM) into the purification process to steer solutions toward flatter regions of the error landscape. Theoretical analysis demonstrates that, in the low-noise limit, the method recovers local maxima of the Gaussian-smoothed data density. Empirical evaluations show that the proposed framework significantly outperforms current state-of-the-art defenses against strong deterministic white-box attacks while maintaining high accuracy on clean samples.
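The summary centers on minimizing the expected reconstruction error of noise-corrupted data. A minimal Monte Carlo sketch of that objective, assuming a generic reconstruction model; the `toy_denoiser` below is a hypothetical stand-in, not the paper's trained score model:

```python
import numpy as np

def expected_reconstruction_error(x, denoiser, sigma=0.1, n_samples=64, rng=None):
    """Monte Carlo estimate of E_eps ||denoiser(x + sigma*eps) - x||^2,
    the expected reconstruction error under Gaussian corruption."""
    rng = np.random.default_rng(0) if rng is None else rng
    errs = []
    for _ in range(n_samples):
        eps = rng.standard_normal(x.shape)
        errs.append(np.sum((denoiser(x + sigma * eps) - x) ** 2))
    return float(np.mean(errs))

# Toy linear denoiser that shrinks inputs toward a single "mode" at the origin.
toy_denoiser = lambda z: 0.9 * z

x_near_mode = np.zeros(4)   # sits at the toy model's mode
x_far = np.ones(4)          # sits away from the mode
# Under this toy model, the sample at the mode incurs a lower expected error,
# which is the property the purification search exploits.
```

In the paper's low-noise analysis, minimizers of this quantity concentrate near modes of the Gaussian-smoothed data density; the toy model above only illustrates the qualitative ordering.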

๐Ÿ“ Abstract
We propose a novel deterministic purification method to improve adversarial robustness by mapping a potentially adversarial sample toward a nearby sample that lies close to a mode of the data distribution, where classifiers are more reliable. We design the method to be deterministic to ensure reliable test accuracy and to prevent the degradation of effective robustness observed in stochastic purification approaches when the adversary has full knowledge of the system and its randomness. We employ a score model trained by minimizing the expected reconstruction error of noise-corrupted data, thereby learning the structural characteristics of the input data distribution. Given a potentially adversarial input, the method searches within its local neighborhood for a purified sample that minimizes the expected reconstruction error under noise corruption and then feeds this purified sample to the classifier. During purification, sharpness-aware minimization is used to guide the purified samples toward flat regions of the expected reconstruction error landscape, thereby enhancing robustness. We further show that, as the noise level decreases, minimizing the expected reconstruction error biases the purified sample toward local maximizers of the Gaussian-smoothed density; under additional local assumptions on the score model, we prove recovery of a local maximizer in the small-noise limit. Experimental results demonstrate significant gains in adversarial robustness over state-of-the-art methods under strong deterministic white-box attacks.
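The purification step described in the abstract can be sketched as a SAM-style descent on the expected reconstruction error: take an ascent step to the worst-case neighbor within radius rho, then descend using the gradient evaluated there. Everything below (`sam_purify`, the quadratic `ere` surrogate, the finite-difference gradient, and the step sizes) is a hypothetical toy illustration, not the authors' implementation:

```python
import numpy as np

def num_grad(f, x, h=1e-4):
    """Central-difference gradient of a scalar function f at a 1-D point x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def sam_purify(x_adv, ere, steps=50, lr=0.1, rho=0.05):
    """Deterministic purification sketch: descend the expected reconstruction
    error with a SAM-style two-step update, biasing iterates toward flat
    regions of the error landscape."""
    x = x_adv.copy()
    for _ in range(steps):
        g = num_grad(ere, x)
        gn = np.linalg.norm(g)
        if gn < 1e-12:
            break
        # SAM ascent step to the (linearized) worst-case neighbor in a rho-ball.
        x_pert = x + rho * g / gn
        # Descent step using the gradient taken at the perturbed point.
        x = x - lr * num_grad(ere, x_pert)
    return x

# Toy error surrogate whose minimizer (standing in for a data mode) is the origin.
ere = lambda z: float(np.sum(z ** 2))
x_adv = np.array([0.8, -0.6])   # stand-in for a perturbed input
x_pur = sam_purify(x_adv, ere)  # pulled back toward the surrogate mode
```

The update is deterministic, matching the abstract's motivation: with no sampling inside the loop, an adversary with full knowledge of the defense cannot exploit randomness, and clean-sample accuracy is reproducible across runs.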
Problem

Research questions and friction points this paper is trying to address.

adversarial robustness
deterministic purification
expected reconstruction error
sharpness-aware minimization
score model
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial purification
sharpness-aware minimization
expected reconstruction error
score-based generative model
deterministic defense
๐Ÿ”Ž Similar Papers
No similar papers found.
Vinh Hoang
Department of Mathematics, RWTH Aachen University, Germany; Institute for a Sustainable Hydrogen Economy (IHE), Forschungszentrum Jülich, Germany
Sebastian Krumscheid
Karlsruhe Institute of Technology
uncertainty quantification, numerical analysis, stochastic differential equations, multiscale methods, applied and computational
Holger Rauhut
Professor for Mathematics, LMU Munich
applied harmonic analysis, compressive sensing, deep learning, signal and image processing, random
Raรบl Tempone
Computer, Electrical and Mathematical Sciences and Engineering, KAUST, Saudi Arabia