HO-FMN: Hyperparameter Optimization for Fast Minimum-Norm Attacks

📅 2024-07-11
🏛️ Neurocomputing
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing gradient-based adversarial attacks that rely on fixed loss functions, optimizers, and hyperparameters often overestimate model robustness, search for minimum-norm perturbations inefficiently, and are highly sensitive to hyperparameter choices. To address these limitations, this work introduces Bayesian hyperparameter optimization into the Fast Minimum-Norm (FMN) attack framework for the first time, establishing an end-to-end automated tuning pipeline that jointly optimizes the attack loss, step-size schedule, and gradient-update rule. Evaluated on CIFAR-10, CIFAR-100, and ImageNet, the method achieves a 3.2× speedup over baselines including PGD and AutoAttack while reducing the average L₂ perturbation magnitude by 18%, markedly improving the trade-off between attack efficiency and perturbation minimization. The core innovation is embedding Bayesian optimization directly in the loop of adversarial example generation, enabling adaptive, fine-grained calibration of robustness evaluations.
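To make the tuning pipeline concrete, here is a hedged sketch of the outer optimization loop. Optuna's default TPE sampler stands in for the Bayesian optimizer, and run_fmn_attack is a hypothetical placeholder for an actual FMN implementation; none of these names come from the paper's code.

```python
# Sketch of Bayesian HPO over FMN attack hyperparameters.
# Assumptions: Optuna's TPE sampler as the Bayesian optimizer;
# run_fmn_attack is a hypothetical helper, not the authors' API.
import optuna
import torch

def run_fmn_attack(model, x, y, loss, optimizer, scheduler, step_size):
    """Hypothetical FMN runner: returns per-sample L2 perturbation norms,
    with float('inf') for samples the attack failed to misclassify."""
    raise NotImplementedError  # plug in an actual FMN implementation here

def objective(trial, model, x, y):
    cfg = dict(
        loss=trial.suggest_categorical("loss", ["logit_diff", "cross_entropy"]),
        optimizer=trial.suggest_categorical("optimizer", ["sgd", "adam"]),
        scheduler=trial.suggest_categorical("scheduler", ["cosine", "plateau"]),
        step_size=trial.suggest_float("step_size", 1e-2, 10.0, log=True),
    )
    norms = run_fmn_attack(model, x, y, **cfg)
    # Smaller median perturbation = stronger attack = better configuration.
    return torch.median(norms).item()

# study = optuna.create_study(direction="minimize")
# study.optimize(lambda t: objective(t, model, x, y), n_trials=50)
```

The objective rewards configurations that misclassify with the smallest perturbations, which is the natural figure of merit for a minimum-norm attack.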

Problem

Research questions and friction points this paper is trying to address.

Optimizing hyperparameters for minimum-norm adversarial attacks
Addressing overly optimistic robustness evaluations of ML models
Providing dynamic adjustment of attack parameters and loss functions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic hyperparameter optimization for adversarial attacks
Parametric formulation of the Fast Minimum-Norm (FMN) algorithm
Adjustable loss, optimizer, and scheduler components (see the sketch below)
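As a rough illustration of these adjustable components, the following minimal PyTorch sketch wires a choice of loss, optimizer, and step-size scheduler into a single attack configuration. The component names and registry layout are assumptions for illustration, not the paper's actual interface.

```python
# Minimal sketch (PyTorch) of FMN's swappable loss/optimizer/scheduler.
# All names here are illustrative assumptions, not the paper's code.
from dataclasses import dataclass
import torch
import torch.nn.functional as F

@dataclass
class FMNConfig:
    loss: str = "logit_diff"   # "logit_diff" (margin) or "cross_entropy"
    optimizer: str = "sgd"     # "sgd" or "adam"
    scheduler: str = "cosine"  # "cosine" or "plateau"
    step_size: float = 1.0
    steps: int = 100

def make_loss(name):
    if name == "logit_diff":
        # Margin between the true-class logit and the best other logit;
        # minimizing it pushes samples across the decision boundary.
        def margin(logits, y):
            true = logits.gather(1, y[:, None]).squeeze(1)
            other = logits.scatter(1, y[:, None], float("-inf")).amax(1)
            return (true - other).sum()
        return margin
    # Negated cross-entropy: minimizing it increases the true-class loss.
    return lambda logits, y: -F.cross_entropy(logits, y)

def make_components(cfg: FMNConfig, delta: torch.Tensor):
    """Build the loss/optimizer/scheduler for one attack run, where
    delta is the perturbation tensor being optimized (requires_grad=True)."""
    opt_cls = {"sgd": torch.optim.SGD, "adam": torch.optim.Adam}[cfg.optimizer]
    opt = opt_cls([delta], lr=cfg.step_size)
    if cfg.scheduler == "cosine":
        sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=cfg.steps)
    else:
        sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt)
    return make_loss(cfg.loss), opt, sched

# Usage: delta = torch.zeros_like(x, requires_grad=True)
#        loss_fn, opt, sched = make_components(FMNConfig(), delta)
```

Exposing all three components as named choices is what turns the fixed FMN recipe into a search space the Bayesian optimizer can explore.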