Optimizing the Adversarial Perturbation with a Momentum-based Adaptive Matrix

📅 2025-12-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Gradient-based adversarial attacks (e.g., PGD, MI-FGSM) commonly rely on sign-based perturbation scaling, which leads to poor convergence and optimization instability. Method: Motivated by fundamental optimization principles, this work shows that PGD is a specific reformulation of the projected gradient method, and that replacing its sign-based scaling with a conventional adaptive matrix built from accumulated gradients turns PGD into AdaGrad. Building on this insight, the authors propose AdaMI, a novel attack that optimizes the perturbation with a momentum-based adaptive matrix, integrating momentum estimation, projected gradient optimization, and cumulative gradient-weighted updates. Contribution/Results: AdaMI is proved to attain optimal convergence for convex problems, resolving the non-convergence issue inherent in MI-FGSM. Extensive experiments demonstrate that AdaMI significantly improves cross-model transferability over state-of-the-art methods while enhancing optimization stability and the perceptual imperceptibility of the generated adversarial perturbations.

📝 Abstract
Generating adversarial examples (AEs) can be formulated as an optimization problem. Among various optimization-based attacks, the gradient-based PGD and the momentum-based MI-FGSM have garnered considerable interest. However, all of these attacks use the sign function to scale their perturbations, which raises several theoretical concerns from the point of view of optimization. In this paper, we first reveal that PGD is actually a specific reformulation of the projected gradient method that uses only the current gradient to determine its step-size. Further, we show that when a conventional adaptive matrix built from the accumulated gradients is used to scale the perturbation, PGD becomes AdaGrad. Motivated by this analysis, we present a novel momentum-based attack, AdaMI, in which the perturbation is optimized with a momentum-based adaptive matrix. AdaMI is proved to attain optimal convergence for convex problems, indicating that it addresses the non-convergence issue of MI-FGSM and thereby ensures stability of the optimization process. The experiments demonstrate that the proposed momentum-based adaptive matrix can serve as a general and effective technique to boost adversarial transferability over the state-of-the-art methods across different networks while maintaining better stability and imperceptibility.
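The paper does not spell out its update rule here, but the ingredients it names (momentum estimation, an AdaGrad-style adaptive matrix from accumulated gradients, and projection onto the perturbation ball) suggest the following minimal sketch. The function names, hyperparameter values, and the exact combination of terms are assumptions for illustration, not the authors' algorithm; the contrast with the classic sign-scaled MI-FGSM step is the point.

```python
import numpy as np

def mi_fgsm_step(x_adv, grad, momentum, mu=1.0, alpha=0.01):
    """Classic MI-FGSM update: L1-normalized momentum accumulation,
    then a sign-based step (the scaling AdaMI aims to replace)."""
    momentum = mu * momentum + grad / (np.sum(np.abs(grad)) + 1e-12)
    return x_adv + alpha * np.sign(momentum), momentum

def adami_style_step(x_adv, grad, momentum, sq_accum, x_orig,
                     mu=1.0, alpha=0.01, eps=8 / 255, delta=1e-8):
    """Hypothetical AdaMI-style update: the sign() scaling is replaced by
    an AdaGrad-like diagonal adaptive matrix built from accumulated
    squared gradients, followed by projection onto the L_inf eps-ball."""
    momentum = mu * momentum + grad          # momentum estimate
    sq_accum = sq_accum + grad ** 2          # cumulative squared gradients
    # per-coordinate adaptive scaling instead of sign(momentum)
    step = alpha * momentum / (np.sqrt(sq_accum) + delta)
    x_adv = x_adv + step
    # projected gradient step: stay within the eps-ball and valid pixel range
    x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)
    return np.clip(x_adv, 0.0, 1.0), momentum, sq_accum
```

Unlike the sign step, whose magnitude is a constant alpha in every coordinate, the adaptive step shrinks along coordinates with large accumulated gradients, which is the mechanism the abstract credits for better stability and imperceptibility.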
Problem

Research questions and friction points this paper is trying to address.

Optimizing adversarial perturbation with adaptive matrix
Addressing non-convergence issue in momentum-based attacks
Enhancing adversarial transferability and stability across networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Momentum-based adaptive matrix for perturbation optimization
AdaMI attack ensures optimal convergence for convex problems
General technique boosting adversarial transferability across networks
Wei Tao
Huazhong University of Science and Technology
Quantization · LLM · Time-Series
Sheng Long
Ph.D. Candidate, Northwestern University
human computer interaction · visualization · behavioral science
Xin Liu
Jiangxi University of Finance and Economics, Nanchang, 330032, China.
Wei Li
Army Arms University of PLA, Hefei, 230031, China.
Qing Tao
Hefei Institute of Technology, Hefei, 238076, China.