🤖 AI Summary
This work provides the first systematic study of the adversarial vulnerability of non-learned iterative optimizers, such as proximal gradient methods, demonstrating that minute input perturbations can significantly distort the objective landscape and thereby shift the convergence point. To model and enhance robustness, the authors unroll a truncated optimization process into a trainable deep network, which makes adversarial training applicable, and theoretically characterize how this training affects adversarial sensitivity. Experiments confirm the vulnerability of classical optimizers on image reconstruction tasks; after the proposed adversarial training, the optimization path becomes markedly more stable, reducing solution deviation by over 60% on average. Key contributions: (1) establishing the intrinsic adversarial fragility of conventional iterative optimizers; (2) a robustification framework based on deep unrolling, backed by theoretical guarantees; and (3) the first systematic empirical framework for analyzing the adversarial robustness of non-learned optimizers.
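The core sensitivity claim can be illustrated with a minimal NumPy sketch (not the paper's code): run ISTA, the standard proximal gradient method for the LASSO objective \(\tfrac12\|Ax-y\|^2 + \lambda\|x\|_1\), on a clean measurement and on a slightly perturbed one, and compare the solutions. The perturbation here is random rather than adversarially optimized, so it only demonstrates input sensitivity; a crafted attack would shift the minimizer further. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]   # sparse ground truth
y = A @ x_true

x_clean = ista(A, y, lam=0.1)

# A tiny input perturbation changes the objective surface, so the
# iterations converge to a (slightly) different minimizer.
delta = 0.01 * rng.standard_normal(30)
x_pert = ista(A, y + delta, lam=0.1)
print("solution shift:", np.linalg.norm(x_pert - x_clean))
```

Because the entire iterative map is deterministic, the shift in the output is attributable purely to the input perturbation, which is the vulnerability the paper formalizes.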
📝 Abstract
Machine learning (ML) models are often sensitive to carefully crafted yet seemingly unnoticeable perturbations. Such adversarial examples are considered to be a property of ML models, often associated with their black-box operation and sensitivity to features learned from data. This work examines the adversarial sensitivity of non-learned decision rules, and particularly of iterative optimizers. Our analysis is inspired by the recent developments in deep unfolding, which cast such optimizers as ML models. We show that non-learned iterative optimizers share the sensitivity to adversarial examples of ML models, and that attacking iterative optimizers effectively alters the optimization objective surface in a manner that modifies the minima sought. We then leverage the ability to cast iteration-limited optimizers as ML models to enhance robustness via adversarial training. For a class of proximal gradient optimizers, we rigorously prove how their learning affects adversarial sensitivity. We numerically back our findings, showing the vulnerability of various optimizers, as well as the robustness induced by unfolding and adversarial training.
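The unfolding step referenced in the abstract can be sketched as follows: a fixed number of K proximal gradient iterations is rewritten as a K-layer network whose per-layer step sizes and thresholds become trainable parameters. The sketch below (class and parameter names are hypothetical, not from the paper) only shows the forward pass with parameters initialized to their classical ISTA values; adversarial training would then optimize these parameters against worst-case input perturbations.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

class UnrolledISTA:
    """K truncated ISTA iterations viewed as a K-layer network.

    Each layer carries its own step size and threshold; in a
    deep-unfolding setup these are the trainable parameters.
    Here they are simply initialized to the classical ISTA values.
    """
    def __init__(self, A, lam, K=15):
        L = np.linalg.norm(A, 2) ** 2
        self.A = A
        self.steps = np.full(K, 1.0 / L)       # per-layer step sizes
        self.thresholds = np.full(K, lam / L)  # per-layer soft thresholds

    def forward(self, y):
        x = np.zeros(self.A.shape[1])
        for mu, tau in zip(self.steps, self.thresholds):
            x = soft_threshold(x - mu * (self.A.T @ (self.A @ x - y)), tau)
        return x

# Toy usage: recover a sparse signal from random linear measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40))
y = A @ np.eye(40)[5]  # measurement of a 1-sparse signal
net = UnrolledISTA(A, lam=0.1, K=15)
x_hat = net.forward(y)
```

Casting the iteration-limited optimizer as a network in this way is what lets standard adversarial training machinery (e.g., training on worst-case perturbed inputs) be applied to an otherwise non-learned algorithm.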