🤖 AI Summary
This work addresses the challenge of jointly optimizing quantization and pruning in post-training neural network compression. We propose SPFQ+, the first unified stochastic path-following framework that jointly models ultra-low-bit quantization (down to 1 bit) and structured pruning. Our method introduces learnable scaling parameters and a generalized stochastic operator that embeds structured sparsity constraints directly into the stochastic optimization. Theoretically, we establish the first rigorous error upper bounds for quantization, pruning, and their joint operation, and further design a robust error correction mechanism. Experiments across multiple models and datasets demonstrate that SPFQ+ achieves high-accuracy joint compression at 1–4 bits, with error bounds substantially tighter than those obtained by applying quantization or pruning in isolation. Moreover, it delivers over a 2.3× inference speedup while preserving accuracy and retaining formal theoretical guarantees.
📝 Abstract
Quantization and pruning are two essential techniques for compressing neural networks, yet they are often treated independently, with limited theoretical analysis connecting them. This paper introduces a unified framework for post-training quantization and pruning based on stochastic path-following algorithms. Our approach builds on the Stochastic Path Following Quantization (SPFQ) method, extending its applicability to pruning and to low-bit quantization, including the challenging 1-bit regime. By incorporating a scaling parameter and generalizing the stochastic operator, the proposed method achieves robust error correction and yields rigorous theoretical error bounds for quantization, for pruning, and for their combination.
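To make the path-following idea concrete: such methods quantize a neuron's weights one coordinate at a time, feeding the accumulated output error on calibration data back into the next coordinate before it is rounded stochastically. The sketch below is an illustrative toy, not the authors' SPFQ+ algorithm: it uses plain unbiased stochastic rounding, a fixed scale in place of the learnable scaling parameter, and simple magnitude pruning folded into the rounding operator to mimic a "generalized stochastic operator". All names (`stochastic_round`, `quantize_neuron`, `prune_frac`) are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x, step):
    # Unbiased stochastic rounding of a scalar x to the grid step * Z:
    # round up with probability equal to the fractional position.
    lo = np.floor(x / step) * step
    return lo + step * (rng.random() < (x - lo) / step)

def quantize_neuron(w, X, bits=3, scale=None, prune_frac=0.0):
    """Path-following quantization of one neuron's weights w (shape (d,))
    given calibration inputs X (shape (n, d)). Each coordinate absorbs the
    running output error of the previous steps before being rounded; weights
    whose corrected value is small are pruned to exactly zero, so pruning
    and quantization run through the same sequential operator."""
    d = w.shape[0]
    levels = 2 ** (bits - 1)
    step = (scale if scale is not None else np.abs(w).max()) / levels
    thresh = np.quantile(np.abs(w), prune_frac) if prune_frac > 0 else 0.0
    q = np.zeros(d)
    u = np.zeros(X.shape[0])              # running error X @ (w - q) so far
    for t in range(d):
        xt = X[:, t]
        # error-corrected target for coordinate t
        c = w[t] + (u @ xt) / (xt @ xt)
        if abs(c) < thresh:               # pruning folded into the operator
            q[t] = 0.0
        else:
            q[t] = np.clip(stochastic_round(c, step),
                           -levels * step, levels * step)
        u += (w[t] - q[t]) * xt
    return q

# Usage: quantize a random neuron to 3 bits with 10% pruning and measure
# the relative output error on the calibration data.
X = rng.standard_normal((256, 64))
w = rng.standard_normal(64)
q = quantize_neuron(w, X, bits=3, prune_frac=0.1)
rel_err = np.linalg.norm(X @ (w - q)) / np.linalg.norm(X @ w)
```

The error-feedback step is what distinguishes path-following schemes from naive per-weight rounding: each rounding decision partially cancels the output error left by the previous ones, which is the mechanism behind the tighter error bounds the abstract refers to.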