🤖 AI Summary
Existing adversarial-robustness-aware pruning methods lack standardized evaluation protocols, which hinders fair and systematic comparison. Method: We introduce the first standardized benchmarking framework for robustness-oriented pruning, conducting a comprehensive empirical analysis of 32 state-of-the-art methods (spanning structured and unstructured pruning, sensitivity- and gradient-based criteria, and joint robust training paradigms) across CIFAR-10, CIFAR-100, and Tiny-ImageNet. Contribution/Results: We identify a non-monotonic relationship between sparsity and adversarial robustness, which motivates a novel robustness-aware pruning taxonomy. Experimental results show that only five of the 32 methods simultaneously preserve clean accuracy and maintain robustness under PGD and AutoAttack. This work establishes a reproducible benchmark and provides theoretical insights for building secure, compressed AI models.
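To make the sparsity notion concrete: unstructured magnitude pruning, the simplest of the paradigms benchmarked here, removes a fixed fraction of the smallest-magnitude weights. The function below is a minimal illustrative sketch of that idea (not the implementation of any specific benchmarked method):

```python
def magnitude_prune(weights, sparsity):
    """Unstructured magnitude pruning: zero out the fraction `sparsity`
    of weights with the smallest absolute values."""
    k = int(sparsity * len(weights))
    if k == 0:
        return list(weights)
    # Threshold at the k-th smallest absolute value.
    thresh = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= thresh else w for w in weights]

# Example: at 50% sparsity, the 3 smallest-magnitude weights are zeroed.
w = [0.05, -0.9, 0.3, -0.02, 0.7, 0.1]
pruned = magnitude_prune(w, 0.5)  # -> [0.0, -0.9, 0.3, 0.0, 0.7, 0.0]
```

The paper's non-monotonicity finding concerns how the adversarial robustness of the retrained network varies as `sparsity` is swept from 0 toward 1: it does not degrade (or improve) uniformly with higher sparsity.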