Pushing the Limits of Sparsity: A Bag of Tricks for Extreme Pruning

📅 2024-11-20
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Deep neural networks suffer from severe accuracy degradation under extreme sparsity (99.90%–99.99%) due to gradient instability and layer collapse. To address this, we propose Extreme Adaptive Sparse Training (EAST), a novel sparse training framework. EAST introduces three synergistic mechanisms: (i) dynamic ReLU phase switching to enhance gradient flow robustness; (ii) intra-residual structured weight sharing to alleviate parameter scarcity; and (iii) cyclically evolving sparsity levels and sparsity patterns for adaptive, stability-preserving updates. Crucially, EAST requires neither retraining nor dense initialization. On CIFAR-10/100 and ImageNet, ResNet-34 and ResNet-50 trained with EAST achieve near-dense-model accuracy at 99.99% sparsity—substantially outperforming state-of-the-art sparse training methods.
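
The cyclically evolving sparsity mentioned above lends itself to a compact illustration. Below is a minimal sketch in Python, assuming a cosine-shaped oscillation between a lower sparsity bound and the final extreme target whose amplitude decays to zero by the end of training; the function name, the schedule shape, and the default values are illustrative assumptions rather than the paper's exact formulation.

```python
import math

def cyclic_sparsity(step, total_steps, s_final=0.9999, s_low=0.9990, n_cycles=4):
    """Illustrative cyclic sparsity schedule: the target sparsity oscillates
    between s_low and s_final, and the oscillation is damped so the schedule
    settles exactly at s_final on the last step."""
    progress = step / total_steps
    amplitude = (s_final - s_low) * (1.0 - progress)   # decays to 0 by the end
    phase = 2.0 * math.pi * n_cycles * progress
    return s_final - 0.5 * amplitude * (1.0 + math.cos(phase))
```

A sparse trainer would query such a schedule whenever it regenerates masks, pruning when the target rises and re-growing connections when it dips, so that both the sparsity level and the sparsity pattern keep changing during training.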

📝 Abstract
Pruning of deep neural networks has been an effective technique for reducing model size while preserving most of the performance of dense networks, which is crucial for deploying models on memory- and power-constrained devices. While recent sparse learning methods have shown promising performance up to moderate sparsity levels such as 95% and 98%, accuracy quickly deteriorates when pushing sparsities to extreme levels. Obtaining sparse networks at such extreme sparsity levels presents unique challenges, such as fragile gradient flow and heightened risk of layer collapse. In this work, we explore network performance beyond the commonly studied sparsities and propose a collection of techniques that enable the continuous learning of networks without accuracy collapse even at extreme sparsities, including 99.90%, 99.95%, and 99.99% on ResNet architectures. Our approach combines 1) Dynamic ReLU phasing, where DyReLU initially allows for richer parameter exploration before being gradually replaced by standard ReLU, 2) weight sharing, which reuses parameters within a residual layer while maintaining the same number of learnable parameters, and 3) cyclic sparsity, where both sparsity levels and sparsity patterns evolve dynamically throughout training to better encourage parameter exploration. We evaluate our method, which we term Extreme Adaptive Sparse Training (EAST), at extreme sparsities using ResNet-34 and ResNet-50 on CIFAR-10, CIFAR-100, and ImageNet, achieving significant performance improvements over the state-of-the-art methods we compare against.
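
To make the Dynamic ReLU phasing idea more concrete, here is a minimal PyTorch sketch. It assumes a simplified channel-wise DyReLU with two linear pieces whose coefficients come from global average pooling, blended with a plain ReLU through a `mix` buffer annealed from 0 (pure DyReLU) to 1 (pure ReLU) during training; the module name, the linear blending, and the annealing schedule are our assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PhasedDyReLU(nn.Module):
    """Simplified DyReLU (two linear pieces per channel), gradually replaced
    by standard ReLU via the `mix` coefficient (0 = DyReLU, 1 = ReLU)."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(                          # tiny hyper-network producing
            nn.Linear(channels, channels // reduction),   # 4 coefficients per channel
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 4 * channels),
        )
        self.register_buffer("mix", torch.zeros(1))

    def forward(self, x):                                 # x: (N, C, H, W)
        n, c, _, _ = x.shape
        theta = self.fc(x.mean(dim=(2, 3)))               # (N, 4C) from global average pooling
        a1, a2, b1, b2 = theta.view(n, c, 4).sigmoid().unbind(dim=-1)
        a1 = (0.5 + a1).view(n, c, 1, 1)                  # slope of the first piece, around 1
        a2 = (a2 - 0.5).view(n, c, 1, 1)                  # slope of the second piece, around 0
        b1 = (b1 - 0.5).view(n, c, 1, 1)
        b2 = (b2 - 0.5).view(n, c, 1, 1)
        dy = torch.maximum(a1 * x + b1, a2 * x + b2)      # input-dependent piecewise-linear activation
        return (1.0 - self.mix) * dy + self.mix * F.relu(x)
```

During training the `mix` buffer would be pushed from 0 toward 1, for example linearly over the first part of training, after which the module behaves like a standard ReLU.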
Problem

Research questions and friction points this paper is trying to address.

Addresses accuracy collapse in deep neural networks at extreme sparsity levels.
Proposes techniques to maintain performance in highly pruned networks.
Enables continuous learning without degradation at sparsities up to 99.99%.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic ReLU phasing for parameter exploration
Weight sharing within residual layers (see the sketch after this list)
Cyclic sparsity for dynamic training adaptation
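
A minimal PyTorch sketch of weight sharing inside a residual layer follows, assuming a ResNet-style basic block in which both 3x3 convolutions draw on a single weight tensor, each through its own binary sparsity mask; the class name, the use of two masks, and the initialization are hypothetical illustration choices, not the paper's exact scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedWeightBasicBlock(nn.Module):
    """Residual basic block whose two 3x3 convolutions share one weight
    tensor, each applied through its own (non-learnable) sparsity mask."""

    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(channels, channels, 3, 3))
        nn.init.kaiming_normal_(self.weight, nonlinearity="relu")
        # independent masks let the two uses of the weight prune differently
        self.register_buffer("mask1", torch.ones_like(self.weight))
        self.register_buffer("mask2", torch.ones_like(self.weight))
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(F.conv2d(x, self.weight * self.mask1, padding=1)))
        out = self.bn2(F.conv2d(out, self.weight * self.mask2, padding=1))
        return F.relu(out + x)
```

The shared tensor keeps the number of learnable parameters at that of one convolution, while the two masks still let each use of the weight adopt a different sparse pattern.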
Authors

Andy Li - Monash University
A. Durrant - University of Aberdeen (UK)
Milan Markovic - Interdisciplinary Fellow in Data & AI, University of Aberdeen (UK)
Lu Yin - University of Surrey (UK)
G. Leontidis - University of Aberdeen (UK)