Adaptive Pruning with Module Robustness Sensitivity: Balancing Compression and Robustness

📅 2024-10-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Conventional neural network pruning methods typically rely on weight magnitudes, neglecting the joint optimization of adversarial robustness and accuracy—leading to substantial degradation in robustness after compression. To address this, we propose Module-level Robust Sensitivity (MRS), a novel metric that explicitly models each module’s sensitivity to adversarial perturbations as a structured pruning criterion, enabling robustness-aware adaptive pruning. Building upon MRS, we design the Module Robust Pruning and Fine-tuning (MRPF) algorithm, compatible with mainstream adversarial training paradigms and architectures including ResNet, VGG, and MobileViT. Evaluated on SVHN, CIFAR-10/100, and Tiny-ImageNet, MRPF achieves high clean accuracy and low computational overhead while improving PGD-attack accuracy by 3.2–7.8% across benchmarks—outperforming all existing robust pruning approaches.

Technology Category

Application Category

📝 Abstract
Neural network pruning has traditionally focused on weight-based criteria to achieve model compression, frequently overlooking the crucial balance between adversarial robustness and accuracy. Existing approaches often fail to preserve robustness in pruned networks, leaving them more susceptible to adversarial attacks. This paper introduces Module Robustness Sensitivity (MRS), a novel metric that quantifies layer-wise sensitivity to adversarial perturbations and dynamically informs pruning decisions. Leveraging MRS, we propose Module Robust Pruning and Fine-Tuning (MRPF), an adaptive pruning algorithm compatible with any adversarial training method, offering both flexibility and scalability. Extensive experiments on SVHN, CIFAR, and Tiny-ImageNet across diverse architectures, including ResNet, VGG, and MobileViT, demonstrate that MRPF significantly enhances adversarial robustness while maintaining competitive accuracy and computational efficiency. Furthermore, MRPF consistently outperforms state-of-the-art structured pruning methods in balancing robustness, accuracy, and compression. This work establishes a practical and generalizable framework for robust pruning, addressing the long-standing trade-off between model compression and robustness preservation.
Problem

Research questions and friction points this paper is trying to address.

Balances adversarial robustness and accuracy in neural network pruning.
Introduces Module Robustness Sensitivity to guide pruning decisions.
Proposes MRPF for robust pruning across diverse architectures.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Module Robustness Sensitivity (MRS) metric
Proposes Module Robust Pruning and Fine-Tuning (MRPF)
Enhances robustness, accuracy, and compression balance
🔎 Similar Papers
No similar papers found.