Where and How to Enhance: Discovering Bit-Width Contribution for Mixed Precision Quantization

📅 2025-08-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing mixed-precision quantization (MPQ) methods typically rely on gradient-based optimization to allocate bitwidths, implicitly assuming that the magnitude of quantization parameters reflects their contribution to model accuracy—an assumption lacking theoretical grounding and empirical validation. Method: We argue that bitwidth contribution should be defined causally by its impact on downstream task performance, not parameter magnitude. To this end, we introduce Shapley values—rooted in cooperative game theory—to rigorously quantify the marginal contribution of each layer’s or operator’s bitwidth to overall model accuracy. We employ Monte Carlo sampling for efficient approximation, circumventing exhaustive search. Contribution/Results: Evaluated on mainstream vision benchmarks, our approach significantly outperforms gradient-based MPQ methods, achieving superior accuracy–efficiency trade-offs under comparable computational cost. This work establishes a novel, interpretable, and verifiable paradigm for bitwidth allocation in quantized neural networks.
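The permutation-sampling approximation described above can be sketched generically. This is a minimal illustration, not the authors' code: `value_fn` is a hypothetical stand-in for evaluating a quantized network's accuracy under a given coalition of (layer, bit-width) choices, and the toy payoff below is additive so the estimate can be checked by hand.

```python
import random

def shapley_monte_carlo(players, value_fn, num_samples=200, seed=0):
    """Estimate Shapley values by sampling random player orderings.

    players:  hashable ids, e.g. (layer, bit-width) options
    value_fn: maps a frozenset of players to a scalar payoff
              (in the paper's setting, task accuracy)
    """
    rng = random.Random(seed)
    contrib = {p: 0.0 for p in players}
    for _ in range(num_samples):
        order = list(players)
        rng.shuffle(order)
        coalition = frozenset()
        prev = value_fn(coalition)
        for p in order:
            coalition = coalition | {p}
            cur = value_fn(coalition)
            contrib[p] += cur - prev  # marginal contribution of p
            prev = cur
    return {p: total / num_samples for p, total in contrib.items()}

# Toy additive game: each player's true Shapley value equals its weight.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
est = shapley_monte_carlo(list(weights),
                          lambda s: sum(weights[p] for p in s))
```

For an additive payoff every marginal contribution equals the player's own weight, so the estimate recovers the exact Shapley values; in the non-additive setting of a real network, more samples trade compute for variance.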

📝 Abstract
Mixed precision quantization (MPQ) is an effective quantization approach for achieving an accuracy-complexity trade-off in neural networks by assigning different bit-widths to the activations and weights of each layer. Existing MPQ methods typically optimize quantization policies (i.e., bit-width allocations) by gradient descent, termed Differentiable MPQ (DMPQ). At the end of the search, the bit-width associated with the largest-valued quantization parameter is selected to form the final mixed precision quantization policy, under the implicit assumption that the values of quantization parameters reflect each operation's contribution to accuracy. While much has been discussed about improving MPQ, the bit-width selection process itself has received little attention. We study this problem and argue that the magnitude of quantization parameters does not necessarily reflect the actual contribution of a bit-width to task performance. We therefore propose a Shapley-based MPQ (SMPQ) method, which measures each bit-width operation's direct contribution to the MPQ task. To reduce computation cost, a Monte Carlo sampling-based approximation strategy is proposed for Shapley computation. Extensive experiments on mainstream benchmarks demonstrate that SMPQ consistently achieves state-of-the-art performance compared with gradient-based competitors.
Problem

Research questions and friction points this paper is trying to address.

Determining the actual contribution of each bit-width in mixed precision quantization
Challenging the assumption that quantization parameter magnitudes reflect accuracy impact
Proposing a Shapley-based method to measure bit-width contributions accurately
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shapley-based MPQ method for bit-width contribution
Monte Carlo sampling for Shapley computation
Gradient-free bit-width selection for accuracy
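The gradient-free selection in the last bullet amounts to an argmax over estimated contributions rather than over parameter magnitudes. A toy sketch, with hypothetical layer names and contribution scores (not values from the paper):

```python
# Hypothetical per-option contribution estimates, keyed by (layer, bit-width).
contrib = {
    ("conv1", 4): 0.12, ("conv1", 8): 0.31,
    ("conv2", 4): 0.27, ("conv2", 8): 0.05,
}

def select_policy(contrib):
    """Pick, per layer, the bit-width with the largest estimated contribution."""
    best = {}
    for (layer, bits), score in contrib.items():
        if layer not in best or score > best[layer][1]:
            best[layer] = (bits, score)
    return {layer: bits for layer, (bits, _) in best.items()}

policy = select_policy(contrib)  # → {"conv1": 8, "conv2": 4}
```

Replacing "largest parameter value" with "largest estimated contribution" is the whole selection-rule change; the search cost moves into estimating `contrib`.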