Most Convolutional Networks Suffer from Small Adversarial Perturbations

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of theoretical characterization of when adversarial examples exist in convolutional neural networks (CNNs) and of the minimal perturbation scale required to generate them. Focusing on randomly initialized CNNs, the study establishes for the first time that adversarial examples necessarily exist within an ℓ²-ball of radius approximately ‖x‖/√d around any input x, and that they can be constructed efficiently with a single gradient step. The key technical ingredient combines Fourier decomposition with random matrix theory to analyze the singular value structure of convolution operators, yielding a tight upper bound on the required adversarial perturbation. The result both provides a theoretical foundation for the vulnerability of random CNNs to extremely small ℓ²-norm perturbations and supports the theoretical efficacy of single-step attack methods.
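The Fourier argument mentioned above can be illustrated numerically. A circular convolution is a circulant matrix, and a circulant matrix is diagonalized by the discrete Fourier transform, so its singular values are exactly the magnitudes of the DFT of its kernel. The sketch below is illustrative only (the paper's operators are multi-channel CNN layers, not this toy 1-D case); the dimension `d = 16` and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
kernel = rng.standard_normal(d)  # first column of the circulant operator

# Circulant matrix implementing circular convolution with `kernel`:
# C[i, j] = kernel[(i - j) mod d]
C = np.array([[kernel[(i - j) % d] for j in range(d)] for i in range(d)])

# Singular values via SVD vs. magnitudes of the DFT of the kernel.
# Circulant matrices are normal, so singular values = |eigenvalues| = |FFT(kernel)|.
sv = np.sort(np.linalg.svd(C, compute_uv=False))
fft_mag = np.sort(np.abs(np.fft.fft(kernel)))

print(np.allclose(sv, fft_mag))  # True
```

This is why the FFT gives cheap, exact control over the spectrum of a random convolution operator, which is the quantity the paper's bound is built on.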

📝 Abstract
The existence of adversarial examples is relatively well understood for random fully connected neural networks, but much less so for convolutional neural networks (CNNs). The recent work [Daniely, 2025] establishes that adversarial examples can be found in CNNs, at some non-optimal distance from the input. We extend this work and prove that adversarial examples in random CNNs with input dimension $d$ can be found already at $\ell_2$-distance of order $\lVert x \rVert /\sqrt{d}$ from the input $x$, which is essentially the nearest possible. We also show that such small adversarial perturbations can be found using a single step of gradient descent. To derive our results we use Fourier decomposition to efficiently bound the singular values of a random linear convolutional operator, which is the main ingredient of a CNN layer. This bound might be of independent interest.
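The single-step attack at the $\lVert x \rVert/\sqrt{d}$ scale can be sketched as a normalized gradient step with that exact $\ell_2$ budget. The toy model below (circular convolution, ReLU, random linear readout, with hand-computed gradient) is an assumption for illustration, not the paper's architecture or proof.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 256
kernel = rng.standard_normal(d) / np.sqrt(d)
# Circulant matrix for circular convolution with `kernel`
C = np.array([[kernel[(i - j) % d] for j in range(d)] for i in range(d)])
v = rng.standard_normal(d) / np.sqrt(d)  # random readout weights

def f(x):
    # Toy one-layer random "CNN": circular conv -> ReLU -> linear readout
    return v @ np.maximum(C @ x, 0.0)

x = rng.standard_normal(d)
h = C @ x
grad = C.T @ (v * (h > 0))  # gradient of f at x (ReLU mask times readout)

eps = np.linalg.norm(x) / np.sqrt(d)        # budget of order ||x|| / sqrt(d)
delta = -eps * grad / np.linalg.norm(grad)  # single normalized gradient step

# delta is a descent direction for f to first order, with ||delta|| = eps
print(f(x), f(x + delta))
```

The point of the theorem is that a budget of this tiny order already suffices, with high probability over the random weights, to change the network's decision; the sketch only shows the mechanics of the single-step construction.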
Problem

Research questions and friction points this paper is trying to address.

adversarial examples
convolutional neural networks
small perturbations
robustness
random CNNs
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial examples
convolutional neural networks
Fourier decomposition
singular values
gradient descent