🤖 AI Summary
Addressing imperceptible adversarial attacks, this work revisits the classic PGD algorithm from a pure optimization perspective, without introducing auxiliary modules or additional loss terms, and proposes two synergistic strategies: a dynamic step size and an adaptive early stop. The former adaptively adjusts the step size along the gradient direction to approach the minimal-perturbation solution near the decision boundary; the latter terminates the iteration once local convergence is reached, eliminating redundant perturbation strength. The method strictly preserves the PGD framework and relies only on a simple $l_2$-norm objective. In untargeted attacks against ResNet-50, it achieves a 100% attack success rate, an average $l_2$ distance of 0.89, a PSNR of 52.93, and a running time of only 57 seconds, surpassing state-of-the-art methods. Crucially, this work shows that optimization-level enhancements alone, rather than architectural complexity, can substantially outperform sophisticated structural designs in generating imperceptible adversarial perturbations.
📝 Abstract
Imperceptible adversarial attacks have recently attracted increasing research interest. Existing methods typically incorporate external modules or loss terms other than a simple $l_p$-norm into the attack process to achieve imperceptibility, while we argue that such additional designs may not be necessary. In this paper, we rethink the essence of imperceptible attacks and propose two simple yet effective strategies to unleash the potential of PGD, a common and classical attack, for imperceptibility from an optimization perspective. Specifically, the Dynamic Step Size is introduced to find the optimal solution with minimal attack cost towards the decision boundary of the attacked model, and the Adaptive Early Stop strategy is adopted to reduce the redundant strength of adversarial perturbations to the minimum level. The proposed PGD-Imperceptible (PGD-Imp) attack achieves state-of-the-art results in imperceptible adversarial attacks for both untargeted and targeted scenarios. When performing untargeted attacks against ResNet-50, PGD-Imp attains 100% (+0.3%) ASR, 0.89 (-1.76) $l_2$ distance, and 52.93 (+9.2) PSNR with 57s (-371s) running time, significantly outperforming existing methods.
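The two strategies can be illustrated with a minimal toy sketch. The code below is not the paper's implementation: it assumes a toy 2-D linear classifier in place of ResNet-50, and the specific step-shrink rule and convergence test (names like `dynamic_pgd`, `shrink`, `tol` are illustrative) are one plausible realization of a dynamic step size that refines toward the decision boundary and an early stop that ends iteration once progress stalls.

```python
import numpy as np

# Toy linear classifier: label = sign(w . x + b). A stand-in for the
# attacked deep model (the paper evaluates against ResNet-50).
w = np.array([1.0, -1.0])
b = 0.0

def predict(x):
    return 1 if w @ x + b >= 0 else -1

def dynamic_pgd(x, label, alpha=1.0, shrink=0.7, tol=1e-4, max_iter=200):
    """Untargeted PGD-style sketch with a dynamic step size and early stop.

    While the iterate is still correctly classified, step toward the
    decision boundary; once it becomes adversarial, step back and shrink
    the step size, homing in on the minimal-perturbation solution.
    Stop early when successive iterates barely move.
    """
    # For a linear model, the adversarial direction is fixed;
    # a deep model would recompute the loss gradient each step.
    direction = -label * w / np.linalg.norm(w)
    x_adv = x.copy()
    best = None  # smallest-perturbation adversarial iterate found
    prev = None
    for _ in range(max_iter):
        if predict(x_adv) == label:
            x_adv = x_adv + alpha * direction   # push toward the boundary
        else:
            pert = np.linalg.norm(x_adv - x)
            if best is None or pert < np.linalg.norm(best - x):
                best = x_adv.copy()
            x_adv = x_adv - alpha * direction   # pull back toward the input
            alpha *= shrink                     # dynamic step size: refine
        if prev is not None and np.linalg.norm(x_adv - prev) < tol:
            break                               # adaptive early stop
        prev = x_adv.copy()
    return best if best is not None else x_adv

x = np.array([2.0, 0.0])            # classified +1 (w @ x = 2)
x_adv = dynamic_pgd(x, predict(x))
```

In this toy setting the true minimal $l_2$ perturbation is the distance from `x` to the hyperplane, $2/\sqrt{2} \approx 1.414$, so the returned perturbation norm can be checked against that value; the shrinking step is what lets the iterate settle arbitrarily close to the boundary instead of overshooting by a fixed amount.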