Solving adversarial examples requires solving exponential misalignment

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies the root cause of adversarial examples as an exponential dimensional mismatch between how neural networks and humans represent class concepts. By formalizing the notion of a Perceptual Manifold (PM) for a class concept, the study reveals that neural network PMs have dimensionalities far higher than those of natural human concepts, yielding exponentially many inputs that the network confidently assigns to a concept but that humans would not. The paper establishes, for the first time, that adversarial vulnerability stems from this PM dimensional misalignment and posits dimension alignment as a necessary condition for robustness. Through perceptual manifold modeling, dimension estimation, and robustness evaluation across 18 models of varying robustness, the authors demonstrate a strong negative correlation between PM dimensionality and both robust accuracy and the distance from inputs to the PM, indicating that perceptual alignment is achieved only when the PM dimension approaches human-level compactness.
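The dimension-estimation step in the summary can be made concrete. The paper's exact estimator is not given here, so the sketch below uses a standard nearest-neighbor intrinsic-dimension estimator (the Levina-Bickel MLE) applied to a finite sample of a PM, i.e., inputs the network confidently assigns to one class. The `model`, `candidates` batch, and the 0.9 confidence threshold are illustrative assumptions, not the paper's protocol.

```python
import numpy as np
import torch
from scipy.spatial import cKDTree

@torch.no_grad()
def sample_pm_points(model, candidates, class_idx, conf_threshold=0.9):
    """Keep the candidate inputs the network confidently assigns to
    `class_idx`; these form a finite sample of that class's PM."""
    probs = torch.softmax(model(candidates), dim=1)
    mask = probs[:, class_idx] >= conf_threshold
    return candidates[mask].flatten(1).cpu().numpy()

def mle_intrinsic_dimension(points, k=10):
    """Levina-Bickel MLE intrinsic-dimension estimate from a point sample
    (assumes points are distinct, so neighbour distances are nonzero)."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)  # k+1 includes the point itself
    dists = dists[:, 1:]                    # drop the zero self-distance
    # Per-point estimate: (k-1) / sum_j log(T_k / T_j), then average.
    log_ratios = np.log(dists[:, -1:] / dists[:, :-1])
    return float(((k - 1) / log_ratios.sum(axis=1)).mean())
```

Under the paper's hypothesis, this estimate should come out orders of magnitude larger for network PMs than for samples of human-recognizable class members.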

📝 Abstract
Adversarial attacks, input perturbations imperceptible to humans that fool neural networks, remain both a persistent failure mode in machine learning and a phenomenon with mysterious origins. To shed light on those origins, we define and analyze a network's perceptual manifold (PM) for a class concept as the space of all inputs confidently assigned to that class by the network. We find, strikingly, that the dimensionalities of neural network PMs are orders of magnitude higher than those of natural human concepts. Since volume typically grows exponentially with dimension, this suggests exponential misalignment between machines and humans, with exponentially many inputs confidently assigned to concepts by machines but not by humans. Furthermore, this provides a natural geometric hypothesis for the origin of adversarial examples: because a network's PM fills such a large region of input space, any input will be very close to every class concept's PM. Our hypothesis thus suggests that adversarial robustness cannot be attained without dimensional alignment of machine and human PMs, and it therefore makes strong predictions: both robust accuracy and the distance to any PM should be negatively correlated with PM dimension. We confirmed these predictions across 18 networks of varying robust accuracy. Crucially, we find that even the most robust networks are still exponentially misaligned, and only the few PMs whose dimensionality approaches that of human concepts exhibit alignment with human perception. Our results connect the fields of alignment and adversarial examples, and suggest that the curse of high dimensionality in machine PMs is a major impediment to adversarial robustness.
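One of the abstract's testable predictions is that the distance from an arbitrary input to any class's PM shrinks as PM dimension grows. The paper's actual distance measure is not reproduced here; the sketch below merely upper-bounds that distance with a simple targeted gradient attack that grows a perturbation until the network confidently assigns the input to the target class. The confidence threshold, step size, step count, and penalty weight are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distance_to_pm(model, x, target_class, conf=0.9, lr=0.01, steps=500):
    """Upper-bound the L2 distance from input `x` (shape [1, ...]) to the
    target class's PM: grow a perturbation until the network confidently
    assigns the perturbed input to `target_class`."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        probs = F.softmax(model(x + delta), dim=1)
        if probs[0, target_class].item() >= conf:
            break  # perturbed input has entered the PM
        # Raise target-class confidence while penalizing perturbation size.
        loss = -torch.log(probs[0, target_class] + 1e-12) + 0.1 * delta.norm()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return delta.detach().norm().item()
```

If the geometric hypothesis holds, this distance should be small for most inputs and most target classes, and should decrease as the target PM's estimated dimension increases.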
Problem

Research questions and friction points this paper is trying to address.

adversarial examples
perceptual manifold
exponential misalignment
dimensionality
adversarial robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

perceptual manifold
adversarial examples
dimensionality alignment
exponential misalignment
adversarial robustness