Over-parameterization and Adversarial Robustness in Neural Networks: An Overview and Empirical Analysis

📅 2024-06-14
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Prior studies report contradictory findings on how over-parameterization affects neural network adversarial robustness, partly due to inconsistent attack evaluation protocols. Method: We propose a unified empirical framework that jointly assesses the reliability of mainstream adversarial attacks (e.g., PGD, FGSM) and model robustness under controlled experimental conditions, incorporating attack-effectiveness diagnostics and rigorous ablations via controlled variables. Contribution/Results: Our analysis reveals that prior conclusions attributing reduced robustness to over-parameterization are partially confounded by unreliable attacks. When validated, demonstrably effective attacks are employed, over-parameterized networks consistently exhibit enhanced adversarial robustness, with statistically significant and reproducible gains across diverse settings. This work resolves a key conceptual controversy and establishes a robust empirical foundation confirming over-parameterization as a genuine robustness-enhancing factor.
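The attack-effectiveness diagnostics the summary refers to can be sketched with a minimal, illustrative sanity check (the model, parameters, and threshold below are assumptions for illustration, not the paper's actual protocol): a working untargeted gradient attack should raise the model's loss at least as much as a random perturbation of the same L-infinity magnitude.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w, b):
    """Binary cross-entropy of a logistic model on a single input."""
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def gradient_sign_attack(x, y, w, b, eps):
    """One FGSM-style signed-gradient step within an L-inf budget eps."""
    p = sigmoid(w @ x + b)
    return x + eps * np.sign((p - y) * w)   # (p - y) * w is d(loss)/dx

# Toy model and input (illustrative values only).
rng = np.random.default_rng(0)
w, b = np.array([2.0, -1.0]), 0.0
x, y, eps = np.array([1.0, 0.5]), 1, 0.5

clean = bce_loss(x, y, w, b)
adv = bce_loss(gradient_sign_attack(x, y, w, b, eps), y, w, b)
noise = bce_loss(x + eps * np.sign(rng.standard_normal(2)), y, w, b)

# Diagnostic: the attack is suspect if random noise of the same
# magnitude hurts the model more than the "optimized" perturbation.
attack_ok = adv >= clean and adv >= noise
```

A diagnostic in this spirit flags attacks that silently fail (e.g., vanishing gradients) and would otherwise make a model look more robust than it is.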

📝 Abstract
Thanks to their extensive capacity, over-parameterized neural networks exhibit superior predictive capabilities and generalization. However, a large parameter space is considered one of the main suspected causes of neural networks' vulnerability to adversarial examples -- input samples crafted ad hoc to induce a desired misclassification. The literature contains contradictory claims both for and against the robustness of over-parameterized networks. These contradictory findings might be due to failures of the attacks employed to evaluate the networks' robustness. Previous research has demonstrated that, depending on the considered model, the algorithm employed to generate adversarial examples may not function properly, leading to an overestimate of the model's robustness. In this work, we empirically study the robustness of over-parameterized networks against adversarial examples. Unlike previous works, however, we also evaluate the considered attack's reliability to support the veracity of the results. Our results show that over-parameterized networks are robust against adversarial attacks, unlike their under-parameterized counterparts.
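As a minimal illustration of the kind of attack being evaluated (a sketch under assumed parameters, not the paper's implementation), FGSM perturbs an input by one signed-gradient step of the loss within an L-infinity budget. Here it is applied to a toy binary logistic-regression model whose weights are made up for the example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method for a binary logistic-regression model.

    x : input vector, y : label in {0, 1}, (w, b) : model parameters,
    eps : perturbation budget (L-infinity radius).
    Returns the adversarial example x + eps * sign(grad_x loss).
    """
    p = sigmoid(w @ x + b)          # predicted probability of class 1
    grad_x = (p - y) * w            # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy example: a confidently classified point pushed across the boundary.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])            # w @ x + b = 1.5 > 0 -> class 1
x_adv = fgsm(x, y=1, w=w, b=b, eps=1.0)
# w @ x_adv + b = -1.5 < 0 -> now (mis)classified as class 0
```

The perturbation stays inside the eps ball yet flips the prediction; stronger iterative attacks such as PGD repeat this step with a projection back onto the budget.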
Problem

Research questions and friction points this paper is trying to address.

Examining adversarial robustness in over-parameterized neural networks
Assessing reliability of attack methods for robustness evaluation
Comparing robustness between over- and under-parameterized network models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluating attack reliability for robustness assessment
Empirical analysis of over-parameterized networks' adversarial robustness
Comparing robustness between over- and under-parameterized networks