🤖 AI Summary
Existing gradient-based adversarial attacks rely on fixed loss functions, optimizers, and hyperparameters; as a result, they often overestimate model robustness, search inefficiently for minimal-norm perturbations, and remain highly sensitive to hyperparameter choices. To address these limitations, this work introduces Bayesian hyperparameter optimization into the Fast Minimum-Norm (FMN) attack framework for the first time, establishing an end-to-end automated tuning pipeline that jointly optimizes the attack objective, the step-size schedule, and the gradient update mechanism. Evaluated on CIFAR-10, CIFAR-100, and ImageNet, the method achieves a 3.2× speedup over baselines including PGD and AutoAttack while reducing the average L₂ perturbation magnitude by 18%, significantly improving the trade-off between attack efficiency and perturbation minimization. The core innovation lies in embedding Bayesian optimization deep within the inner loop of adversarial example generation, enabling adaptive, fine-grained calibration of robustness evaluation.
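The paper's code is not reproduced here, but the tuning idea can be sketched. The snippet below runs Gaussian-process Bayesian optimization with an expected-improvement acquisition over two hypothetical FMN hyperparameters (step size and momentum); `attack_cost` is a synthetic stand-in for the real objective, which would run the attack and return the mean L₂ norm of the resulting perturbations. All function names and parameter ranges are illustrative assumptions, not the authors' implementation.

```python
import math
import numpy as np

def rbf_kernel(A, B, length_scale=0.3):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(X, y, Xq, noise=1e-6):
    """GP posterior mean/std at query points Xq given observations (X, y)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Kq = rbf_kernel(Xq, X)
    mu = Kq @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Kq.T)
    var = 1.0 - np.sum(Kq * v.T, axis=1)  # k(x, x) = 1 for this RBF kernel
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    """Expected improvement for a minimization problem."""
    z = (best - mu) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (best - mu) * Phi + sigma * phi

def attack_cost(params):
    """Synthetic placeholder: in practice, run the FMN attack with these
    hyperparameters and return the mean L2 perturbation norm."""
    step_size, momentum = params
    return (step_size - 0.6) ** 2 + 0.5 * (momentum - 0.8) ** 2 + 0.05

def tune_attack(n_init=5, n_iter=15, seed=0):
    """Bayesian-optimization loop over (step_size, momentum) in [0, 1]^2."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n_init, 2))       # initial random design
    y = np.array([attack_cost(x) for x in X])
    g = np.linspace(0.0, 1.0, 21)
    cand = np.array(np.meshgrid(g, g)).reshape(2, -1).T  # candidate grid
    for _ in range(n_iter):
        mu, sigma = gp_posterior(X, y, cand)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, attack_cost(x_next))
    best = np.argmin(y)
    return X[best], y[best]

best_params, best_cost = tune_attack()
```

In a real pipeline the candidate space would also include categorical choices (loss function, optimizer), which off-the-shelf tools such as `skopt.gp_minimize` or Optuna handle directly; the fixed grid here keeps the sketch self-contained.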