FPTQuant: Function-Preserving Transforms for LLM Quantization

πŸ“… 2025-06-05
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the severe accuracy degradation caused by outlier-sensitive INT4 quantization of large language models (LLMs), this paper proposes function-preserving transforms (FPTs): a lightweight, fusion-friendly suite of four transforms consisting of a mergeable pre-RoPE transform for queries and keys, a mergeable transform for values, a mergeable scaling transform inside the MLP block, and a cheap dynamic scaling transform. By exploiting the equivariances and channel-wise independencies of the canonical Transformer, the FPTs reshape intermediate activation distributions to be more quantization friendly while leaving the model's function unchanged. The transforms are trained both locally, to reduce outliers, and end-to-end, so that the quantized model's outputs match the full-precision model's; they require no custom kernels and add virtually no inference overhead. FPTQuant enables static INT4 quantization with up to 3.9× speed-up over FP16, performing on par with or exceeding most prior quantization methods, and is only slightly less accurate than a method that is up to 29% slower.

πŸ“ Abstract
Large language models (LLMs) require substantial compute, and thus energy, at inference time. While quantizing weights and activations is effective at improving efficiency, naive quantization of LLMs can significantly degrade performance due to large magnitude outliers. This paper describes FPTQuant, which introduces four novel, lightweight, and expressive function-preserving transforms (FPTs) to facilitate quantization of transformers: (1) a mergeable pre-RoPE transform for queries and keys, (2) a mergeable transform for values, (3) a mergeable scaling transform within the MLP block, and (4) a cheap, dynamic scaling transform. By leveraging the equivariances and independencies inherent to canonical transformer operation, we designed these FPTs to maintain the model's function while shaping the intermediate activation distributions to be more quantization friendly. FPTQuant requires no custom kernels and adds virtually no overhead during inference. The FPTs are trained both locally to reduce outliers, and end-to-end such that the outputs of the quantized and full-precision models match. FPTQuant enables static INT4 quantization with minimal overhead and shows SOTA speed-up of up to 3.9 times over FP. Empirically, FPTQuant has an excellent accuracy-speed trade-off: it performs on par with or exceeds most prior work and only shows slightly lower accuracy compared to a method that is up to 29% slower.
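The core idea behind a "mergeable" transform can be illustrated with the query/key case: an invertible matrix T folded into the query projection, and its inverse transpose folded into the key projection, cancels out in the attention logits. The sketch below (a minimal numpy illustration, not the paper's implementation; it ignores RoPE and uses an arbitrary hypothetical T) verifies this cancellation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
Q = rng.normal(size=(4, d))  # per-token query activations
K = rng.normal(size=(4, d))  # per-token key activations

# Hypothetical invertible transform: in practice T would be merged into
# the Q/K projection weights, so it adds no cost at inference time.
T = rng.normal(size=(d, d)) + 2.0 * np.eye(d)
Q_t = Q @ T                      # transformed queries
K_t = K @ np.linalg.inv(T).T    # keys get the inverse transpose

# Attention logits are unchanged: Q_t K_t^T = Q T T^{-1} K^T = Q K^T,
# while the intermediate Q_t/K_t distributions can be reshaped freely.
assert np.allclose(Q_t @ K_t.T, Q @ K.T)
```

Because T is absorbed into existing weight matrices before deployment, the model's function is preserved exactly while the activations that actually get quantized are reshaped.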
Problem

Research questions and friction points this paper is trying to address.

Reducing LLM inference compute and energy costs
Addressing performance degradation from naive quantization
Enabling efficient INT4 quantization with minimal overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mergeable pre-RoPE transform for queries and keys
Mergeable transform for values
Mergeable scaling transform within the MLP block
Cheap dynamic scaling transform for efficient quantization
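Why such scaling transforms help static INT4 quantization can be seen in a toy numpy experiment (a sketch under assumed values, not the paper's method): one outlier channel forces a large per-tensor quantization step, and a function-preserving per-channel rescale (divide activations by s here, fold s into the next layer's weights) shrinks the error:

```python
import numpy as np

def quantize_int4(x, scale):
    """Simulated symmetric static INT4 quantization (16 levels)."""
    return np.clip(np.round(x / scale), -8, 7) * scale

rng = np.random.default_rng(1)
x = rng.normal(size=(128, 16))
x[:, 0] *= 50.0  # one outlier channel dominates the dynamic range

# Naive static per-tensor scale: the outlier inflates the step size.
scale = np.abs(x).max() / 7
err_naive = np.mean((x - quantize_int4(x, scale)) ** 2)

# Per-channel rescale s, chosen here to equalize channel ranges; in a
# real model s would be merged into adjacent weights, preserving outputs.
s = np.abs(x).max(axis=0) / np.median(np.abs(x).max(axis=0))
x_t = x / s
scale_t = np.abs(x_t).max() / 7
err_scaled = np.mean((quantize_int4(x_t, scale_t) * s - x) ** 2)

assert err_scaled < err_naive  # equalized ranges quantize far better
```

The quantization error concentrates in the single outlier channel instead of being spread across all channels, which is the intuition behind reshaping activation distributions before fixing static scales.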
πŸ”Ž Similar Papers
No similar papers found.