🤖 AI Summary
This work investigates the existence of adversarial examples in random convolutional neural networks (CNNs). Addressing the limitation of prior theoretical analyses—largely confined to fully connected architectures—we propose a novel analytical framework grounded in the isoperimetric inequality on the special orthogonal group $SO(d)$. For the first time, we systematically integrate differential geometry (Lie group structure), high-dimensional probability, and random matrix theory into the robustness analysis of CNNs. We rigorously prove that any random CNN with light-tailed weight distributions almost surely admits a minimal-norm adversarial perturbation, and we provide a quantitative bound on the misclassification rate under such perturbations. Our result generalizes classical findings from fully connected networks to the more practically relevant convolutional setting, while simultaneously simplifying the proof strategy and broadening applicability. Crucially, it reveals that adversarial vulnerability stems fundamentally from intrinsic geometric constraints imposed by high-dimensional symmetry—offering both universality and computational tractability.
📝 Abstract
We show that adversarial examples exist for various random convolutional networks, and furthermore, that this is a relatively simple consequence of the isoperimetric inequality on the special orthogonal group $mathbb{so}(d)$. This extends and simplifies a recent line of work which shows similar results for random fully connected networks.