AI Summary
This work investigates the extent to which adversarial attacks reflect a model's actual robustness under random noise of comparable magnitude, rather than merely characterizing worst-case behavior. To this end, the authors propose a directionally biased perturbation framework governed by a concentration parameter $\kappa$, which interpolates smoothly between isotropic noise and adversarial directions. They further introduce a novel attack strategy designed to expose vulnerabilities in regimes that are statistically closer to random noise. Through systematic evaluations on ImageNet and CIFAR-10, the study delineates the conditions under which common adversarial attacks capture noise-induced failure risk, thereby offering both theoretical grounding and practical guidance for the safety-oriented robustness evaluation of machine learning models.
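One plausible formalization of the quantity at stake (the notation below is illustrative and not taken from the paper): for a classifier $f$, an input $x$, and a perturbation magnitude $\varepsilon$, the misprediction risk under a perturbation distribution $\mathcal{D}_\kappa$ with direction concentration $\kappa$ can be written as

$$
R_\kappa(x) \;=\; \Pr_{\delta \sim \mathcal{D}_\kappa(x,\varepsilon)}\bigl[\, f(x+\delta) \neq f(x) \,\bigr],
\qquad \lVert \delta \rVert = \varepsilon,
$$

where $\mathcal{D}_0$ spreads mass isotropically over directions and $\mathcal{D}_\kappa$ concentrates on the adversarial direction as $\kappa \to \infty$.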
Abstract
Adversarial attacks are widely used to identify model vulnerabilities; however, their validity as proxies for robustness to random perturbations remains debated. We ask whether an adversarial example provides a representative estimate of the misprediction risk under stochastic perturbations of the same magnitude, or instead reflects an atypical worst-case event. To address this question, we introduce a probabilistic analysis that quantifies this risk with respect to directionally biased perturbation distributions, parameterized by a concentration factor $\kappa$ that interpolates between isotropic noise and adversarial directions. Building on this framework, we probe the limits of the connection by proposing an attack strategy designed to expose vulnerabilities in regimes that are statistically closer to uniform noise. Experiments on ImageNet and CIFAR-10 systematically benchmark multiple attacks, revealing when adversarial success meaningfully reflects robustness to random perturbations and when it does not, thereby informing their use in safety-oriented robustness evaluation.
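As a concrete, illustrative reading of such a perturbation model, the sketch below draws perturbations of fixed $\ell_2$ magnitude whose direction follows a von Mises-Fisher distribution centred on a given adversarial direction: $\kappa = 0$ recovers isotropic noise, and large $\kappa$ concentrates on the adversarial direction. The vMF choice, function names, and interface are assumptions made for illustration, not the paper's exact construction.

```python
# Illustrative sketch (not the authors' code): directionally biased perturbations
# whose direction follows a von Mises-Fisher (vMF) distribution centred on an
# adversarial direction. kappa = 0 gives isotropic directions; larger kappa
# concentrates mass around the adversarial direction. All names are hypothetical.
import numpy as np


def sample_vmf_direction(mu, kappa, rng):
    """Draw a unit vector from vMF(mu, kappa) on the unit sphere in R^d (Wood, 1994)."""
    d = mu.shape[0]
    if kappa == 0.0:                        # isotropic case: uniform direction on the sphere
        v = rng.standard_normal(d)
        return v / np.linalg.norm(v)
    # Rejection-sample w = <x, mu>, the cosine of the angle to the mean direction.
    b = (d - 1) / (2.0 * kappa + np.sqrt(4.0 * kappa**2 + (d - 1) ** 2))
    x0 = (1.0 - b) / (1.0 + b)
    c = kappa * x0 + (d - 1) * np.log(1.0 - x0**2)
    while True:
        z = rng.beta((d - 1) / 2.0, (d - 1) / 2.0)
        w = (1.0 - (1.0 + b) * z) / (1.0 - (1.0 - b) * z)
        if kappa * w + (d - 1) * np.log(1.0 - x0 * w) - c >= np.log(rng.uniform()):
            break
    # Uniform unit vector in the tangent space orthogonal to mu.
    v = rng.standard_normal(d)
    v -= v.dot(mu) * mu
    v /= np.linalg.norm(v)
    return w * mu + np.sqrt(max(1.0 - w * w, 0.0)) * v


def biased_perturbation(x, adv_direction, eps, kappa, rng=None):
    """Perturb x with an L2 budget eps along a kappa-concentrated random direction."""
    if rng is None:
        rng = np.random.default_rng()
    mu = adv_direction / np.linalg.norm(adv_direction)
    return x + eps * sample_vmf_direction(mu, kappa, rng)
```

Under these assumptions, an empirical estimate of the misprediction risk above is simply a Monte Carlo average of the event $f(x + \delta) \neq f(x)$ over perturbations drawn this way at a fixed $\kappa$.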