Unveiling and Mitigating Adversarial Vulnerabilities in Iterative Optimizers

📅 2025-04-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work systematically reveals the adversarial vulnerability of non-learned iterative optimizers, such as proximal gradient methods, demonstrating that minute input perturbations can significantly distort the objective landscape and thereby shift the convergence point. To model and enhance robustness, the authors unroll a truncated optimization process into a trainable deep network and theoretically prove its architectural compatibility with adversarial training. Experiments confirm the vulnerability of classical optimizers on image reconstruction tasks; after the proposed adversarial training, optimization-path stability improves markedly, reducing solution deviation by over 60% on average. Key contributions: (1) establishing the intrinsic adversarial fragility of conventional iterative optimizers; (2) a robustification framework based on deep unrolling, backed by theoretical guarantees; and (3) the first systematic empirical framework for analyzing the adversarial robustness of non-learned optimizers.
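To make the unrolling idea concrete, here is a minimal NumPy sketch of a truncated proximal gradient (ISTA) solver for sparse recovery, written so that each iteration corresponds to one "layer" with its own step-size parameter. This is not the paper's implementation: the problem dimensions, hyperparameters, and function names are illustrative, and in practice the per-layer parameters would be trained (e.g., adversarially) with an autodiff framework rather than fixed by hand.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the l1 norm: shrinks each entry toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_ista(A, y, step_sizes, lam):
    """Run ISTA for a fixed, truncated number of iterations.

    Each entry of `step_sizes` plays the role of a per-layer parameter:
    in the unrolled-network view, these (and `lam`) are the weights one
    would train, instead of deriving them from the problem.
    """
    x = np.zeros(A.shape[1])
    for mu in step_sizes:                  # one "layer" per iteration
        grad = A.T @ (A @ x - y)           # gradient of 0.5*||Ax - y||^2
        x = soft_threshold(x - mu * grad, mu * lam)
    return x

# Toy sparse-recovery instance (illustrative sizes).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40)) / np.sqrt(20)
x_true = np.zeros(40)
x_true[[3, 17]] = [1.0, -2.0]
y = A @ x_true

L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
x_hat = unrolled_ista(A, y, step_sizes=[1.0 / L] * 30, lam=0.05)
```

Because the loop is truncated to a fixed depth, the whole map `y -> x_hat` is a feed-forward network, which is what lets adversarial-training machinery be applied to it.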

📝 Abstract
Machine learning (ML) models are often sensitive to carefully crafted yet seemingly unnoticeable perturbations. Such adversarial examples are considered to be a property of ML models, often associated with their black-box operation and sensitivity to features learned from data. This work examines the adversarial sensitivity of non-learned decision rules, and particularly of iterative optimizers. Our analysis is inspired by the recent developments in deep unfolding, which cast such optimizers as ML models. We show that non-learned iterative optimizers share the sensitivity to adversarial examples of ML models, and that attacking iterative optimizers effectively alters the optimization objective surface in a manner that modifies the minima sought. We then leverage the ability to cast iteration-limited optimizers as ML models to enhance robustness via adversarial training. For a class of proximal gradient optimizers, we rigorously prove how their learning affects adversarial sensitivity. We numerically back our findings, showing the vulnerability of various optimizers, as well as the robustness induced by unfolding and adversarial training.
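The abstract's central claim, that attacking an iterative optimizer's input effectively moves the minimum it converges to, can be illustrated with a crafted perturbation of the measurement vector fed to a plain (non-learned) ISTA solver. The sketch below uses a finite-difference, PGD-style ascent under an l-infinity budget; the paper's attacks would instead differentiate through the unrolled iterations, and all names and parameters here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, n_iters=30, lam=0.05):
    # Plain (non-learned) proximal gradient descent for the lasso.
    mu = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x - mu * (A.T @ (A @ x - y)), mu * lam)
    return x

def attack(A, y, eps=0.05, n_steps=10, h=1e-5):
    """Craft a small perturbation of y that displaces ISTA's solution.

    Maximizes ||ista(y + delta) - ista(y)|| by numerical-gradient ascent,
    keeping ||delta||_inf <= eps (a sketch, not the paper's attack).
    """
    x_clean = ista(A, y)
    loss = lambda z: np.linalg.norm(ista(A, z) - x_clean)
    delta = np.zeros_like(y)
    step = eps / n_steps
    for _ in range(n_steps):
        base = loss(y + delta)
        g = np.array([(loss(y + delta + h * e) - base) / h
                      for e in np.eye(len(y))])    # numerical gradient
        delta = np.clip(delta + step * np.sign(g), -eps, eps)
    return delta

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40)) / np.sqrt(20)
x_true = np.zeros(40)
x_true[[3, 17]] = [1.0, -2.0]
y = A @ x_true

delta_adv = attack(A, y)
delta_rand = 0.05 * np.sign(rng.standard_normal(20))
dev_adv = np.linalg.norm(ista(A, y + delta_adv) - ista(A, y))
dev_rand = np.linalg.norm(ista(A, y + delta_rand) - ista(A, y))
# A crafted perturbation typically displaces the recovered signal more
# than a random perturbation of the same l-inf norm.
```

The point of the comparison is that the optimizer's output map `y -> x_hat` is an ordinary (piecewise-smooth) function, so worst-case input directions exist and can be searched for, exactly as with a trained network.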
Problem

Research questions and friction points this paper is trying to address.

Examining adversarial vulnerabilities in non-learned iterative optimizers
Showing that adversarial perturbations alter the optimization objective surface, shifting the minima sought
Enhancing the robustness of iterative optimizers via adversarial training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematically exposing the adversarial sensitivity of non-learned optimizers
Using deep unfolding to cast iteration-limited optimizers as ML models
Enhancing robustness via adversarial training, with rigorous guarantees for a class of proximal gradient optimizers
Elad Sofer
School of ECE, Ben-Gurion University of the Negev, Be'er-Sheva, Israel
Tomer Shaked
School of ECE, Ben-Gurion University of the Negev, Be'er-Sheva, Israel
Caroline Chaux
Aix-Marseille Univ., I2M UMR CNRS 7373
Nir Shlezinger
Ben-Gurion University of the Negev
Signal processing, machine learning, communications, information theory