Signal Collapse in One-Shot Pruning: When Sparse Models Fail to Distinguish Neural Representations

πŸ“… 2025-02-18
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work identifies *activation signal collapse*β€”a drastic decay in layer-wise activation varianceβ€”as the fundamental cause of severe accuracy degradation in one-shot neural network pruning, challenging the conventional attribution to erroneous weight removal. To address this, we propose **REFLOW**, a gradient-free, fine-tuning-free pruning method that requires no weight updates or second-order approximations. REFLOW directly corrects signal distribution via layer-wise variance normalization and rescaling during forward propagation, leveraging only activation statistics. It recovers high-quality sparse subnetworks within the original parameter space. Experiments demonstrate that on ResNeXt-101/ImageNet with 20% remaining weights, REFLOW boosts top-1 accuracy from 4.1% (baseline pruned model) to 78.9%, substantially outperforming state-of-the-art pruning methods. This is the first work to establish signal collapse as the core failure mechanism in one-shot pruning and to introduce a low-overhead, gradient-free paradigm for effective single-step sparsification.
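The summary describes REFLOW as correcting signal collapse via layer-wise variance normalization and rescaling of activations, without touching the pruned weights. A minimal toy sketch of that idea is below; the function names (`magnitude_prune`, `rescale_activations`) and the single-layer setup are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: one-shot magnitude pruning shrinks activation variance
# ("signal collapse"); rescaling activations to the dense layer's variance
# restores the signal. Illustrative only, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def magnitude_prune(w, keep_frac):
    """One-shot magnitude pruning: keep only the largest-|w| fraction."""
    k = int(w.size * keep_frac)
    thresh = np.sort(np.abs(w).ravel())[-k]
    return np.where(np.abs(w) >= thresh, w, 0.0)

def rescale_activations(a_pruned, target_var, eps=1e-8):
    """Rescale pruned activations to match the dense layer's variance."""
    scale = np.sqrt(target_var / (a_pruned.var() + eps))
    return a_pruned * scale

# Toy single layer: inputs x, dense weights w, 20% of weights retained.
x = rng.standard_normal((256, 512))
w = rng.standard_normal((512, 512)) / np.sqrt(512)

a_dense = x @ w
a_pruned = x @ magnitude_prune(w, keep_frac=0.2)
a_fixed = rescale_activations(a_pruned, a_dense.var())

assert a_pruned.var() < a_dense.var()             # variance collapsed
assert abs(a_fixed.var() - a_dense.var()) < 1e-6  # variance restored
```

Because the correction uses only activation statistics (here, a single variance per layer), it needs no gradients, no weight updates, and no second-order information, which is what makes the paradigm cheap.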

πŸ“ Abstract
Neural network pruning is essential for reducing model complexity to enable deployment on resource-constrained hardware. While the performance loss of pruned networks is often attributed to the removal of critical parameters, we identify signal collapse, a reduction in activation variance across layers, as the root cause. Existing one-shot pruning methods focus on weight selection strategies and rely on computationally expensive second-order approximations. In contrast, we demonstrate that mitigating signal collapse, rather than optimizing weight selection, is the key to improving the accuracy of pruned networks. We propose REFLOW, which addresses signal collapse without updating trainable weights, revealing high-quality sparse sub-networks within the original parameter space. REFLOW enables magnitude pruning to achieve state-of-the-art performance, restoring ResNeXt-101 accuracy on ImageNet from under 4.1% to 78.9% with only 20% of the weights retained, surpassing state-of-the-art approaches.
Problem

Research questions and friction points this paper is trying to address.

Addressing signal collapse in neural network pruning
Improving accuracy without updating trainable weights
Enhancing performance of pruned sparse sub-networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mitigates signal collapse via layer-wise activation rescaling
No trainable weight updates, gradients, or second-order approximations
Recovers high-accuracy sparse sub-networks from one-shot magnitude pruning
πŸ”Ž Similar Papers
No similar papers found.