🤖 AI Summary
Existing adversarial example research predominantly focuses on constrained perturbations, which fail to reflect real-world failure modes and lack a principled model of naturally occurring adversarial examples. Method: We propose NatADiff, a natural adversarial example generation framework based on Denoising Diffusion Probabilistic Models (DDPMs) that integrates time-travel sampling with augmented classifier guidance to steer denoising trajectories toward the intersection of the true-class and adversarial-class data distributions. Contribution/Results: By incorporating adversarial-class conditional guidance and explicit manifold-intersection constraints, the method improves semantic fidelity and cross-architecture transferability. Experiments show attack success rates comparable to current state-of-the-art techniques, significantly higher transferability across model architectures, and lower Fréchet Inception Distance (FID) scores, indicating closer alignment with naturally occurring test-time errors and narrowing the gap between synthetically generated adversarial examples and failure patterns observed in practical deployments.
📝 Abstract
Adversarial samples exploit irregularities in the manifold "learned" by deep learning models to cause misclassifications. The study of these adversarial samples provides insight into the features a model uses to classify inputs, which can be leveraged to improve robustness against future attacks. However, much of the existing literature focuses on constrained adversarial samples, which do not accurately reflect test-time errors encountered in real-world settings. To address this, we propose NatADiff, an adversarial sampling scheme that leverages denoising diffusion to generate natural adversarial samples. Our approach is based on the observation that natural adversarial samples frequently contain structural elements from the adversarial class. Deep learning models can exploit these structural elements to shortcut the classification process, rather than learning to genuinely distinguish between classes. To leverage this behavior, we guide the diffusion trajectory towards the intersection of the true and adversarial classes, combining time-travel sampling with augmented classifier guidance to enhance attack transferability while preserving image fidelity. Our method achieves attack success rates comparable to current state-of-the-art techniques, while exhibiting significantly higher transferability across model architectures and better alignment with natural test-time errors as measured by FID. These results demonstrate that NatADiff produces adversarial samples that not only transfer more effectively across models, but also more faithfully resemble naturally occurring test-time errors.
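The guided-trajectory idea can be illustrated with a minimal sketch: a 1-D toy reverse-diffusion loop whose update combines a denoiser score with a classifier-style guidance term, plus occasional "time-travel" (re-noise, then re-denoise) steps. The score function, guidance term, noise schedule, guidance weight, and travel checkpoints below are all hypothetical stand-ins for illustration, not the NatADiff implementation.

```python
import numpy as np

# Toy sketch (assumed parameterisation, not the paper's method):
# guided DDPM-style sampling in 1-D with time-travel resampling.
rng = np.random.default_rng(0)
T = 50
betas = np.linspace(1e-4, 0.2, T)   # toy noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def score(x, t):
    # Stand-in denoiser score: pulls samples toward a "true class" mode at +2.
    return -(x - 2.0)

def guidance(x, t):
    # Stand-in classifier gradient: pulls toward an "adversarial class" mode
    # at -1, steering the trajectory into the region between the two modes.
    return -(x + 1.0)

def reverse_step(x, t, w=0.5):
    # One guided reverse step: score plus weighted adversarial guidance.
    s = score(x, t) + w * guidance(x, t)
    mean = (x + betas[t] * s) / np.sqrt(alphas[t])
    noise = rng.standard_normal() if t > 0 else 0.0
    return mean + np.sqrt(betas[t]) * noise

def forward_jump(x, t_from, t_to):
    # "Time travel": re-noise a partially denoised sample (level t_from)
    # back up to an earlier, noisier step t_to > t_from.
    ab = alpha_bars[t_to] / alpha_bars[t_from]
    return np.sqrt(ab) * x + np.sqrt(1.0 - ab) * rng.standard_normal()

x = rng.standard_normal()          # start from pure noise
t = T - 1
travel_at = {30, 15}               # toy checkpoints for time travel
while t >= 0:
    x = reverse_step(x, t)
    if t in travel_at:
        travel_at.discard(t)       # travel once per checkpoint
        x = forward_jump(x, max(t - 1, 0), t + 5)
        t = t + 5
    else:
        t -= 1
print(x)  # settles near the overlap of the two class modes, not at either one
```

The time-travel jumps repeatedly re-noise and re-denoise the sample, giving the guidance term more opportunities to pull the trajectory into the region where both class distributions overlap, rather than committing early to a single mode.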