A Recovery Guarantee for Sparse Neural Networks

📅 2025-09-24
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the exact recovery of the original sparse weights in two-layer sparse ReLU neural networks. Existing methods suffer from high memory overhead and lack theoretical guarantees for recovery. To overcome these limitations, we propose a low-memory (linear-space) iterative hard thresholding (IHT) algorithm that jointly enforces sparsity constraints and performs gradient-based updates, enabling provably exact weight recovery without requiring predefined structural priors. To the best of our knowledge, this is the first work to provide rigorous theoretical guarantees for sparse weight recovery in ReLU networks. Experiments on sparse MLP reconstruction, MNIST classification, and implicit neural representations demonstrate that our method matches or surpasses the performance of memory-intensive iterative magnitude pruning baselines, while reducing memory consumption by an order of magnitude.

๐Ÿ“ Abstract
We prove the first guarantees of sparse recovery for ReLU neural networks, where the sparse network weights constitute the signal to be recovered. Specifically, we study structural properties of the sparse network weights for two-layer, scalar-output networks under which a simple iterative hard thresholding algorithm recovers these weights exactly, using memory that grows linearly in the number of nonzero weights. We validate this theoretical result with simple experiments on recovery of sparse planted MLPs, MNIST classification, and implicit neural representations. Experimentally, we find performance that is competitive with, and often exceeds, a high-performing but memory-inefficient baseline based on iterative magnitude pruning.
Problem

Research questions and friction points this paper is trying to address.

Proving sparse recovery guarantees for ReLU neural networks
Developing a memory-efficient algorithm for recovering sparse weights
Validating recovery performance against a memory-inefficient pruning baseline
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proves sparse recovery guarantees for ReLU networks
Uses an iterative hard thresholding algorithm for recovery
Achieves memory growth linear in the number of nonzero weights
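To make the "jointly enforces sparsity constraints and performs gradient-based updates" idea concrete for the network setting, here is a hedged sketch of one IHT-style step on a two-layer scalar-output ReLU network f(x) = vᵀ relu(Wx): a gradient step on the squared loss with respect to W, followed by hard thresholding W back to a k-nonzero budget. The architecture matches the abstract's description, but the step size, sparsity budget, and all variable names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def hard_threshold(W, k):
    """Project W onto the set of matrices with at most k nonzero entries."""
    out = np.zeros_like(W)
    idx = np.argsort(np.abs(W), axis=None)[-k:]  # flat indices of top-k
    np.put(out, idx, W.flat[idx])
    return out

d, h = 20, 10          # input dim, hidden width (illustrative)
k = 15                 # sparsity budget for W (illustrative)
W = hard_threshold(rng.standard_normal((h, d)), k)
v = rng.standard_normal(h)

def iht_step(W, v, X, y, k, lr=0.01):
    """One step: gradient of the squared loss w.r.t. W, then projection,
    so the iterate always satisfies the sparsity constraint."""
    Z = X @ W.T                  # pre-activations, shape (m, h)
    A = np.maximum(Z, 0.0)       # ReLU activations
    r = A @ v - y                # residuals, shape (m,)
    # dL/dW[i, j] = mean_m( r * v[i] * 1[Z[:, i] > 0] * X[:, j] )
    G = ((r[:, None] * v[None, :]) * (Z > 0)).T @ X / len(y)
    return hard_threshold(W - lr * G, k)

X = rng.standard_normal((64, d))
y = rng.standard_normal(64)
W = iht_step(W, v, X, y, k)
```

Because the projection is applied after every update, only the current (sparse) iterate needs to be stored, which is where the linear-in-nonzeros memory claim comes from; a dense pruning baseline must instead carry the full weight matrix throughout training.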