Adversarial Pruning: A Survey and Benchmark of Pruning Methods for Adversarial Robustness

📅 2024-09-02
🏛️ Pattern Recognition
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing adversarial robustness-aware pruning methods lack standardized evaluation protocols, hindering fair and systematic comparison. Method: We introduce the first standardized benchmarking framework for robustness-oriented pruning, conducting a comprehensive empirical analysis of 32 state-of-the-art methods—including structured/unstructured, sensitivity/gradient-based, and joint robust training paradigms—across CIFAR-10, CIFAR-100, and Tiny-ImageNet. Contribution/Results: We identify a non-monotonic relationship between sparsity and adversarial robustness, which motivates a novel robustness-aware pruning taxonomy. Experimental results show that only five methods simultaneously preserve clean accuracy and maintain robustness under PGD and AutoAttack. This work establishes a reproducible benchmark and provides theoretical insights for secure, compressed AI models.
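To make the benchmarked setting concrete, here is a minimal sketch of unstructured magnitude pruning, the simplest baseline among the pruning families the summary mentions. This is an illustrative implementation in numpy, not the paper's method; the function name `magnitude_prune` and the example weights are hypothetical.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Unstructured magnitude pruning: zero out the fraction `sparsity`
    of weights with the smallest absolute values, returning the pruned
    weights and the boolean keep-mask."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# Example: prune 50% of a tiny weight matrix
W = np.array([[0.1, -0.8],
              [0.05, 0.9]])
pruned, mask = magnitude_prune(W, 0.5)
# The two smallest-magnitude entries (0.1 and 0.05) are zeroed
```

Robustness-aware pruning methods differ in *which* weights they remove (e.g. gradient- or sensitivity-based scores instead of raw magnitude) and in whether pruning is interleaved with adversarial training, which is precisely the design space the benchmark compares.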

Problem

Research questions and friction points this paper is trying to address.

Survey and categorize adversarial pruning methods for robust model compression
Propose a fair, standardized benchmark for evaluating pruning methods
Empirically analyze the traits of top-performing adversarial pruning methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Survey and taxonomy of adversarial pruning methods
Novel fair evaluation benchmark proposal
Empirical re-evaluation of pruning techniques