🤖 AI Summary
To address structural disconnection and layer collapse in neural network pruning, this paper proposes CoNNect, a differentiable regularizer that, for the first time, incorporates graph-connectivity constraints into an L₀-norm approximation framework, explicitly preserving input-to-output path connectivity during optimization. Theoretically, CoNNect guarantees that the pruned network retains a maximally connected subgraph structure. It supports both structured and unstructured pruning without modifying the base training pipeline, and its graph-based differentiable regularization term is plug-and-play compatible with mainstream pruning frameworks such as DepGraph and LLM-Pruner. Experiments across multiple benchmarks demonstrate that CoNNect significantly improves convergence in iterative pruning and boosts the average accuracy of one-shot pruners by 1.2–2.8% at equivalent sparsity levels.
📝 Abstract
Pruning encompasses a range of techniques aimed at increasing the sparsity of neural networks (NNs). These techniques can generally be framed as minimizing a loss function subject to an $L_0$-norm constraint. This paper introduces CoNNect, a novel differentiable regularizer for sparse NN training that ensures connectivity between input and output layers. CoNNect integrates with established pruning strategies and supports both structured and unstructured pruning. We prove that CoNNect approximates $L_0$-regularization, guaranteeing maximally connected network structures while avoiding issues like layer collapse. Numerical experiments demonstrate that CoNNect improves classical pruning strategies and enhances state-of-the-art one-shot pruners, such as DepGraph and LLM-Pruner.
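To make the connectivity idea concrete, here is a minimal, hypothetical sketch (not the paper's actual regularizer) of how a differentiable input-to-output connectivity score could be built for a feed-forward network: the entry $(i, j)$ of $|W_1|\,|W_2|\cdots|W_L|$ aggregates the absolute weight products over all paths from input $i$ to output $j$, so the sum of that matrix vanishes exactly when no input-output path survives. Penalizing a vanishing score then discourages layer collapse, which plain $L_0$/$L_1$ penalties are indifferent to. The function names `connectivity_score` and `connect_penalty` are illustrative, not from the paper.

```python
import numpy as np


def connectivity_score(weights):
    """Weighted count of input->output paths for an MLP given as a list of
    weight matrices [W_1, ..., W_L] (W_l has shape (fan_in, fan_out)).

    Computes sum of entries of |W_1| @ |W_2| @ ... @ |W_L|: an illustrative
    connectivity proxy, zero iff every input->output path is severed.
    """
    paths = np.abs(weights[0])
    for W in weights[1:]:
        paths = paths @ np.abs(W)
    return float(paths.sum())


def connect_penalty(weights, eps=1e-8):
    """Hypothetical regularizer term: blows up as connectivity vanishes,
    so minimizing (loss + lam * connect_penalty) resists layer collapse."""
    return -np.log(connectivity_score(weights) + eps)
```

For example, zeroing out an entire middle layer drives `connectivity_score` to 0 and the penalty toward `-log(eps)`, whereas an $L_1$ penalty alone would actually reward that configuration. In an autodiff framework, the same expression (matrix products of elementwise absolute values) is differentiable almost everywhere, so it can be added directly to the training loss.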