🤖 AI Summary
To address high weight redundancy and poor synergy among multiple compression techniques in neural network compression, this paper proposes a lossless layer fusion method grounded in neuron-level linear behavior. Leveraging the observation that neurons with ReLU-like activations often operate in an activated, approximately linear regime, our approach theoretically models neuron activation patterns to identify fusible layers and performs structured layer merging. Crucially, the method does not rely on weight sparsity and is orthogonal to, and thus readily combined with, dominant compression techniques such as importance-based pruning. Experiments across multiple benchmark models demonstrate up to 75% parameter reduction (i.e., compressing models to 25% of their original size) without accuracy loss. When combined with pruning, interference is minimal and overall compression potential is significantly enhanced, establishing a novel paradigm for efficient model compression.
📝 Abstract
In neural network compression, most current methods reduce unnecessary parameters by measuring importance and redundancy. To augment already highly optimized existing solutions, we propose linearity-based compression as a novel way to reduce weights in a neural network. It is based on the intuition that, with ReLU-like activation functions, neurons that are almost always activated behave linearly, allowing subsequent layers to be merged. We introduce the theory underlying this compression and evaluate our approach experimentally. Our novel method achieves lossless compression down to 1/4 of the original model size in the majority of tested models. Applying our method to models already pruned by importance-based techniques shows very little interference between the two types of compression, demonstrating that the techniques can be successfully combined. Overall, our work lays the foundation for a new type of compression method that enables smaller and ultimately more efficient neural network models.
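To make the core intuition concrete, the following is a minimal NumPy sketch (not the paper's implementation) of why an always-activated ReLU layer permits fusion: if every hidden neuron stays in its active regime, `relu(z) == z` there, so two consecutive affine layers compose into a single affine layer. The weights, input, and shapes below are illustrative assumptions chosen so that all hidden pre-activations are positive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two consecutive dense layers: y = W2 @ relu(W1 @ x + b1) + b2.
# Non-negative W1, b1 and a non-negative input x guarantee that every
# hidden neuron's pre-activation is positive, i.e. always activated.
W1 = np.abs(rng.normal(size=(4, 3)))
b1 = np.abs(rng.normal(size=4))
W2 = rng.normal(size=(2, 4))
b2 = rng.normal(size=2)
x = np.abs(rng.normal(size=3))

# Reference forward pass through the two layers with ReLU in between.
hidden = np.maximum(W1 @ x + b1, 0.0)
y_ref = W2 @ hidden + b2

# Fusion: in the active regime relu acts as the identity, so the two
# affine maps collapse into one (W2 @ W1, W2 @ b1 + b2).
W_fused = W2 @ W1
b_fused = W2 @ b1 + b2
y_fused = W_fused @ x + b_fused

assert np.allclose(y_ref, y_fused)

# Parameter count drops from (4*3 + 4) + (2*4 + 2) = 26
# to (2*3 + 2) = 8 in this toy configuration.
```

The toy example also shows where the parameter savings come from: the fused layer's size depends only on the input and output dimensions, so the hidden layer's width disappears from the parameter count entirely.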