🤖 AI Summary
The increasing scale of deep neural networks intensifies the need for model compression, yet conventional techniques—such as pruning, quantization, and low-rank decomposition—often incur substantial accuracy degradation. To address this, we propose a smooth-transition unified compression framework that concurrently executes the original and compressed models, employing a progressive contribution transfer mechanism to dynamically reweight their outputs, thereby enabling stable knowledge distillation. Our approach integrates model-parallel fine-tuning, progressive weight decay, and multi-strategy compression co-optimization, ensuring compatibility with diverse compression methods. Extensive experiments across computer vision and natural language processing benchmarks demonstrate that our method achieves an average accuracy improvement of over 3% relative to baseline compression approaches, with gains reaching up to 20% in certain configurations. This significantly enhances compression stability, adaptability, and generalization across architectures and tasks.
📝 Abstract
The increasing scale of deep neural networks has led to a growing need for compression techniques such as pruning, quantization, and low-rank decomposition. While these methods are highly effective in reducing memory, computation, and energy consumption, they often introduce severe accuracy degradation when applied directly. We introduce Vanishing Contributions (VCON), a general approach for smoothly transitioning neural models into compressed form. Rather than replacing the original network outright with its compressed version, VCON executes the two in parallel during fine-tuning. The contribution of the original (uncompressed) model is progressively reduced, while that of the compressed model is gradually increased. This smooth transition allows the network to adapt over time, improving stability and mitigating accuracy degradation. We evaluate VCON across computer vision and natural language processing benchmarks, in combination with multiple compression strategies. Across all scenarios, VCON leads to consistent improvements: typical gains exceed 3%, while some configurations exhibit accuracy boosts of up to 20%. VCON thus provides a generalizable method that can be applied on top of existing compression techniques, with evidence of consistent gains across multiple benchmarks.
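The smooth transition described in the abstract can be pictured as a convex combination of the two models' outputs, with a mixing coefficient that grows from 0 to 1 over fine-tuning. The sketch below is illustrative only: the linear schedule, the function names, and the toy models are assumptions, not the paper's exact formulation.

```python
import numpy as np

def alpha_schedule(step, total_steps):
    """Mixing coefficient for the compressed model, rising linearly
    from 0 to 1. (Illustrative; the actual VCON schedule may differ.)"""
    return min(1.0, step / total_steps)

def blended_forward(x, f_orig, f_comp, step, total_steps):
    """Run both models in parallel and mix their outputs.
    Early in fine-tuning the original model dominates; by the end,
    only the compressed model contributes."""
    a = alpha_schedule(step, total_steps)
    return (1.0 - a) * f_orig(x) + a * f_comp(x)

# Hypothetical stand-ins for the two networks:
f_orig = lambda x: 2.0 * x   # "uncompressed" model
f_comp = lambda x: 1.9 * x   # "compressed" approximation

x = np.array([1.0, 2.0])
start = blended_forward(x, f_orig, f_comp, step=0, total_steps=100)
end = blended_forward(x, f_orig, f_comp, step=100, total_steps=100)
# At step 0 the blended output equals the original model's output;
# at the final step it equals the compressed model's output.
```

Because both networks run during fine-tuning, gradients flow through the weighted sum, so the compressed model is trained while its influence on the loss ramps up gradually rather than all at once.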