Prune-then-Quantize or Quantize-then-Prune? Understanding the Impact of Compression Order in Joint Model Compression

📅 2026-03-18
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the unclear impact of the execution order between pruning and quantization in joint compression on model performance. It presents the first systematic investigation of this ordering effect and proposes the "progressive intensity hypothesis," which posits that weaker perturbations should precede stronger ones, a claim substantiated through theoretical perturbation analysis. Extensive experiments across large language and vision models, including complex scenarios such as multi-stage compression and mixed-precision quantization, validate the universality of this hypothesis. The results demonstrate that adhering to the progressive intensity ordering consistently yields significant performance improvements and generalizes across diverse architectures and compression configurations.

๐Ÿ“ Abstract
What happens when multiple compression methods are combined: does the order in which they are applied matter? Joint model compression has emerged as a powerful strategy for achieving higher efficiency by combining multiple methods such as pruning and quantization. A central but underexplored factor in joint model compression is the compression order, i.e., the sequence in which the different methods are applied within the compression pipeline. Most prior studies have sidestepped the issue by assuming orthogonality between techniques, while a few have examined ordering effects only in highly constrained cases. Consequently, the broader role of compression order in shaping model performance remains poorly understood. In this paper, we address this overlooked problem and provide both theoretical and empirical analysis. We formulate the problem of optimizing the compression order and introduce the Progressive Intensity Hypothesis, which states that weaker perturbations should precede stronger ones. We provide theoretical guarantees showing that the relative benefit of one order over the other increases with the underlying performance gap. Extensive experiments on both language and vision models validate the hypothesis and further demonstrate its generality in broader setups such as multi-stage compression and mixed-precision quantization.
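The ordering question in the abstract can be made concrete on a toy weight vector. The sketch below (a minimal illustration, not the paper's experimental protocol) assumes magnitude pruning and uniform symmetric quantization, and compares the weight reconstruction error of the two pipeline orders; the paper measures task performance on full models, so reconstruction error is only a rough proxy here, and the sparsity and bit-width settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def prune(w, sparsity=0.5):
    # Magnitude pruning: zero out the smallest-magnitude fraction of weights.
    k = int(len(w) * sparsity)
    idx = np.argsort(np.abs(w))[:k]
    out = w.copy()
    out[idx] = 0.0
    return out

def quantize(w, bits=4):
    # Uniform symmetric quantization to the given bit width.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

w = rng.normal(size=10_000)

pq = quantize(prune(w))  # prune first, then quantize
qp = prune(quantize(w))  # quantize first, then prune

err_pq = np.linalg.norm(w - pq)
err_qp = np.linalg.norm(w - qp)
print(f"prune->quantize reconstruction error: {err_pq:.3f}")
print(f"quantize->prune reconstruction error: {err_qp:.3f}")
```

Under the hypothesis, the order that applies the weaker perturbation first (e.g., mild pruning before aggressive quantization) should be preferred; which operation is "weaker" depends on the chosen sparsity and bit width, which is exactly what the paper's perturbation analysis formalizes.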
Problem

Research questions and friction points this paper is trying to address.

model compression
compression order
pruning
quantization
joint compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

compression order
joint model compression
pruning
quantization
Progressive Intensity Hypothesis