🤖 AI Summary
This work addresses the unclear impact of the execution order of pruning and quantization in joint compression on model performance. It presents the first systematic investigation of this ordering effect and proposes the "Progressive Intensity Hypothesis," which posits that weaker perturbations should precede stronger ones, a claim substantiated through theoretical perturbation analysis. Extensive experiments across large language and vision models, including complex scenarios such as multi-stage compression and mixed-precision quantization, validate the universality of this hypothesis. The results demonstrate that adhering to the progressive-intensity ordering consistently yields significant performance improvements and generalizes across diverse architectures and compression configurations.
📝 Abstract
What happens when multiple compression methods are combined: does the order in which they are applied matter? Joint model compression has emerged as a powerful strategy for achieving higher efficiency by combining multiple methods such as pruning and quantization. A central but underexplored factor in joint model compression is the compression order, i.e., the sequence in which the different methods are applied within the compression pipeline. Most prior studies have sidestepped the issue by assuming orthogonality between techniques, while a few have examined it only in highly constrained cases. Consequently, the broader role of compression order in shaping model performance remains poorly understood. In this paper, we address this overlooked problem and provide both theoretical and empirical analysis. We formulate the problem of optimizing the compression order and introduce the Progressive Intensity Hypothesis, which states that weaker perturbations should precede stronger ones. We provide theoretical guarantees showing that the relative benefit of one order over another increases with the underlying performance gap. Extensive experiments on both language and vision models validate the hypothesis and further demonstrate its generality to broader setups such as multi-stage compression and mixed-precision quantization.
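To make the ordering question concrete, here is a minimal NumPy sketch (not the paper's method; the pruning ratio, bit width, and error metric are illustrative assumptions) that applies magnitude pruning and uniform quantization to a random weight matrix in both orders and measures the total weight perturbation each pipeline induces:

```python
import numpy as np

# Toy illustration: compare the two compression orders by the total
# weight perturbation they induce on a random matrix. All parameter
# choices (sparsity=0.5, bits=4) are assumptions for this sketch.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))

def prune(w, sparsity=0.5):
    # Magnitude pruning: zero out the smallest-magnitude fraction of weights.
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantize(w, bits=4):
    # Uniform symmetric quantization: round weights onto an evenly
    # spaced grid spanning [-max|w|, max|w|].
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

# Order A: prune first, then quantize; Order B: the reverse.
err_prune_first = np.linalg.norm(quantize(prune(W)) - W)
err_quant_first = np.linalg.norm(prune(quantize(W)) - W)
print(f"prune -> quantize perturbation: {err_prune_first:.2f}")
print(f"quantize -> prune perturbation: {err_quant_first:.2f}")
```

Weight-space perturbation is only a crude proxy for the model-performance gap the paper studies, but swapping the two calls shows that the composed operators do not commute, which is exactly why the ordering question is worth asking.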