Why Do Some Inputs Break Low-Bit LLM Quantization?

πŸ“… 2025-05-24
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Low-bit quantization of large language models (LLMs) often causes severe performance degradation, yet the input-dependent failure mechanisms remain poorly understood. Method: This work links residual stream magnitude to the amplification and accumulation of quantization error across layers, and proposes a residual-magnitude-based error attribution framework that pinpoints precise residual activations in late layers and MLP gating outputs as critical for performance preservation. The approach combines layer-wise error localization, early-exit inference, activation patching, cross-method error correlation analysis, and statistical modeling of residual stream dynamics to characterize how errors propagate. Results: Across 7B–70B LLMs and 50 pairs of 3–4 bit quantization methods, per-example quantization errors are strongly correlated between methods (avg. Pearson's *r* = 0.82), and the residual stream magnitudes of the full-precision model predict which examples will incur large errors. The analysis clarifies why certain inputs break low-bit quantization and which model components matter most for preserving perplexity.

πŸ“ Abstract
Low-bit weight-only quantization significantly reduces the memory footprint of large language models (LLMs), but disproportionately affects certain examples. We analyze diverse 3-4 bit methods on LLMs ranging from 7B-70B in size and find that the quantization errors of 50 pairs of methods are strongly correlated (avg. 0.82) on FineWeb examples. Moreover, the residual stream magnitudes of full-precision models are indicative of future quantization errors. We further establish a hypothesis that relates the residual stream magnitudes to error amplification and accumulation over layers. Using LLM localization techniques, early exiting, and activation patching, we show that examples with large errors rely on precise residual activations in the late layers, and that the outputs of MLP gates play a crucial role in maintaining the perplexity. Our work reveals why certain examples result in large quantization errors and which model components are most critical for performance preservation.
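The abstract's central observation, that residual stream magnitudes of the full-precision model predict quantization errors, can be illustrated on a toy linear layer. The sketch below is a hypothetical stand-in (a single 3-bit round-to-nearest weight vector, synthetic inputs with varying norms), not the paper's setup; it only shows why larger residual magnitudes tend to produce larger per-example errors:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_examples = 1024, 200

# Toy "layer": quantization error is the weight perturbation from
# 3-bit round-to-nearest with a single absmax scale (hypothetical setup).
W = rng.normal(size=d)
q_max = 2 ** (3 - 1) - 1
scale = np.abs(W).max() / q_max
dW = np.round(W / scale) * scale - W  # quantization perturbation

# Synthetic examples with widely varying residual-stream magnitudes.
mags = rng.lognormal(mean=0.0, sigma=1.0, size=n_examples)
X = rng.normal(size=(n_examples, d)) * mags[:, None]

resid_norm = np.linalg.norm(X, axis=1)  # residual magnitude per example
quant_err = np.abs(X @ dW)              # per-example output error
r = np.corrcoef(resid_norm, quant_err)[0, 1]
print(f"Pearson r between residual norm and error: {r:.2f}")
```

With these toy statistics the correlation is clearly positive, mirroring (in miniature) the predictive relationship the paper measures on FineWeb examples.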
Problem

Research questions and friction points this paper is trying to address.

Analyzing why certain inputs cause large errors in low-bit LLM quantization
Investigating how residual stream magnitudes relate to error amplification across layers
Identifying which model components are critical for preserving performance during quantization
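For context on what "3–4 bit" means in these questions, a minimal round-to-nearest weight quantizer (a generic baseline with per-group absmax scales, not any specific method the paper compares) shows how reconstruction error grows as bit-width drops:

```python
import numpy as np

def quantize_rtn(w, bits, group_size=128):
    """Round-to-nearest uniform quantization with per-group absmax scales
    (generic baseline; group_size=128 is an illustrative choice)."""
    q_max = 2 ** (bits - 1) - 1
    groups = w.reshape(-1, group_size)
    scale = np.abs(groups).max(axis=1, keepdims=True) / q_max
    q = np.clip(np.round(groups / scale), -q_max - 1, q_max)
    return (q * scale).reshape(w.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(512, 512)).astype(np.float32)
err4 = np.linalg.norm(quantize_rtn(w, 4) - w) / np.linalg.norm(w)
err3 = np.linalg.norm(quantize_rtn(w, 3) - w) / np.linalg.norm(w)
print(f"relative weight error: 4-bit {err4:.3f}, 3-bit {err3:.3f}")
```

Dropping from 4 to 3 bits roughly doubles the per-weight error, and the paper's question is why that extra error hits some inputs much harder than others.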
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzed quantization error correlations across methods
Linked residual stream magnitudes to error amplification
Identified critical MLP gates for perplexity preservation
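The activation-patching technique used here can be sketched on a toy residual stack: run a "quantized" model, splice in the full-precision residual activation at a late layer, and check how much output error disappears. Everything below (dimensions, Gaussian weight noise standing in for quantization) is a hypothetical stand-in for a real transformer:

```python
import numpy as np

rng = np.random.default_rng(0)
d, layers = 64, 8

# Toy residual network x <- x + W_l x; "quantized" weights are noisy copies.
Ws = [rng.normal(scale=0.1, size=(d, d)) for _ in range(layers)]
Ws_q = [W + rng.normal(scale=0.02, size=(d, d)) for W in Ws]

def run(weights, x, patch_from=None, patch_layer=None):
    """Forward pass; optionally overwrite the residual stream after one
    layer with the activation from a reference (full-precision) run."""
    acts = []
    for i, W in enumerate(weights):
        x = x + W @ x
        if patch_from is not None and i == patch_layer:
            x = patch_from[i]  # activation patching
        acts.append(x.copy())
    return x, acts

x0 = rng.normal(size=d)
y_fp, acts_fp = run(Ws, x0)                     # full precision
y_q, _ = run(Ws_q, x0)                          # quantized
y_patched, _ = run(Ws_q, x0, patch_from=acts_fp, patch_layer=layers - 2)

err_q = np.linalg.norm(y_q - y_fp)
err_patched = np.linalg.norm(y_patched - y_fp)
print(f"error without patch: {err_q:.3f}, with late-layer patch: {err_patched:.3f}")
```

Restoring a precise late-layer residual removes most of the accumulated error, which is the toy analogue of the paper's finding that high-error examples depend on precise residual activations in late layers.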
πŸ”Ž Similar Papers