Fine-Grained Iterative Adversarial Attacks with Limited Computation Budget

📅 2025-10-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the insufficient attack strength of iterative adversarial attacks under constrained computational resources, this paper proposes a fine-grained activation-recomputation mechanism that operates jointly across iterations and network layers, dynamically selecting critical activations for recomputation within a fixed FLOPs budget. The method co-optimizes attack efficacy and efficiency without modifying model parameters or architecture. Experiments demonstrate that, under identical FLOPs budgets, the proposed attack improves average success rates by 8.2% on CIFAR-10/100 and ImageNet; moreover, it matches baseline adversarial-training robustness (within ±0.3% accuracy) using only 30% of the original computational budget, substantially reducing the overhead of efficient adversarial training. The core contribution is the first formulation of activation recomputation as a joint iteration- and layer-wise fine-grained resource-allocation problem, together with a scalable, lightweight scheduling strategy.
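One way to picture the joint iteration–layer scheduling described in the summary is a greedy, knapsack-style allocation: rank every (iteration, layer) recomputation candidate by importance per FLOP and pick candidates until the budget is exhausted. This is a minimal sketch under assumed inputs; the `schedule_recomputation` function, the importance scores, and the cost model are illustrative placeholders, not the paper's actual selection criterion.

```python
# Hypothetical sketch: greedily allocate a fixed FLOPs budget to the
# (iteration, layer) activation recomputations with the best
# importance-per-cost ratio. Scores and costs are made-up examples.

def schedule_recomputation(importance, cost, budget):
    """importance[t][l]: score for recomputing layer l at iteration t.
    cost[l]: FLOPs to recompute layer l. Returns (chosen pairs, FLOPs spent)."""
    candidates = [
        (importance[t][l] / cost[l], t, l)
        for t in range(len(importance))
        for l in range(len(cost))
    ]
    candidates.sort(reverse=True)  # best importance-per-FLOP first
    chosen, spent = [], 0
    for _, t, l in candidates:
        if spent + cost[l] <= budget:  # take it only if it fits the budget
            chosen.append((t, l))
            spent += cost[l]
    return chosen, spent

# Example: 2 iterations, 3 layers, budget of 10 FLOPs units.
imp = [[5.0, 1.0, 3.0], [4.0, 2.0, 6.0]]
cst = [4, 2, 5]
picked, used = schedule_recomputation(imp, cst, budget=10)
```

A greedy ratio rule is the simplest baseline for this kind of budgeted selection; the paper's lightweight scheduler may use a different score or search strategy.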

📝 Abstract
This work tackles a critical challenge for AI-safety research under limited compute: given a fixed computation budget, how can one maximize the strength of iterative adversarial attacks? Coarsely reducing the number of attack iterations lowers cost but substantially weakens effectiveness. To maximize attainable attack efficacy within a constrained budget, we propose a fine-grained control mechanism that selectively recomputes layer activations at both the iteration and layer levels. Extensive experiments show that our method consistently outperforms existing baselines at equal cost. Moreover, when integrated into adversarial training, it attains comparable performance with only 30% of the original budget.
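The iterative attack the abstract builds on can be pictured with a minimal PGD-style loop, where each iteration costs one forward/backward pass and cutting iterations is the coarse way to save compute. Everything here is a toy illustration under assumptions: `pgd_1d`, the 1-D quadratic loss, and all constants are hypothetical, not the paper's experimental setup.

```python
# Toy sign-gradient PGD on a 1-D loss, projected into an eps-ball
# around the clean input. Each loop iteration stands in for one full
# forward/backward pass whose cost the paper's method refines.

def pgd_1d(x0, grad, step, eps, iters):
    """Projected sign-gradient ascent on a 1-D loss within [x0-eps, x0+eps]."""
    x = x0
    for _ in range(iters):
        g = grad(x)
        x += step * (1 if g > 0 else -1)      # sign-gradient step
        x = max(x0 - eps, min(x0 + eps, x))   # project back into the eps-ball
    return x

loss_grad = lambda x: 2 * x  # gradient of the toy loss(x) = x**2
adv = pgd_1d(x0=0.1, grad=loss_grad, step=0.05, eps=0.3, iters=10)
```

With enough iterations the perturbation saturates the eps-ball boundary; halving `iters` to save compute can stop it short, which is the coarse iteration-vs-strength trade-off the fine-grained mechanism is designed to avoid.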
Problem

Research questions and friction points this paper is trying to address.

Maximizing adversarial attack strength under limited computation budget
Fine-grained control of layer activations across iterations and layers
Achieving comparable adversarial training performance with reduced budget
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained control mechanism for layer activations
Selective recomputation across iteration and layer levels
Achieves comparable performance with reduced computation budget