AMS-QUANT: Adaptive Mantissa Sharing for Floating-point Quantization

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitation of floating-point quantization in large language model (LLM) inference—where fixed-integer bit-width constraints hinder optimal trade-offs between compression and accuracy—this paper proposes an adaptive mantissa-sharing quantization method tailored for floating-point representations. It pioneers non-integer-bit-width floating-point quantization (e.g., FP5.33-e2m3, FP4.25-e2m2), jointly optimizing quantization parameters via dynamic mantissa bit sharing and offline adaptive search. An efficient CUDA linear kernel is further designed to minimize memory access overhead. Experiments demonstrate that, compared to FP16, the method achieves 2.8–3.2× inference speedup with negligible accuracy degradation, effectively alleviating both memory and computational bottlenecks. This work establishes a novel paradigm for efficient LLM deployment through fine-grained, hardware-aware floating-point quantization.
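The fractional bit-widths in the names follow directly from the sharing scheme: an eEmM format spends 1 sign + E exponent + M mantissa bits per weight, and when a group of k weights shares the least significant mantissa bit, that bit's cost is amortized over the group. A minimal sketch of this arithmetic (the formula is inferred from the reported formats, not quoted from the paper):

```python
def effective_bits(exp_bits: int, man_bits: int, group_size: int) -> float:
    """Bits per weight when `group_size` weights share the last mantissa bit.

    Each weight privately stores sign + exponent + (man_bits - 1) mantissa
    bits; the one shared mantissa bit is amortized across the group.
    """
    private = 1 + exp_bits + (man_bits - 1)
    return private + 1.0 / group_size

# FP5.33-e2m3: 1 + 2 + 2 private bits, plus one bit shared by k=3 weights
print(round(effective_bits(2, 3, 3), 2))  # 5.33
# FP4.25-e2m2: 1 + 2 + 1 private bits, plus one bit shared by k=4 weights
print(round(effective_bits(2, 2, 4), 2))  # 4.25
```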

📝 Abstract
Large language models (LLMs) have demonstrated remarkable capabilities across a wide range of tasks, but their billions or even trillions of parameters create storage and efficiency bottlenecks for inference. Quantization, particularly floating-point quantization, can speed up LLM inference by reducing the memory footprint and data movement during the inference process. For the first time, we advance floating-point quantization from integer bit-widths to non-integer bit-widths, with a method named AMS-Quant, to further approach the quantization sweet spot. AMS-Quant incorporates two novel techniques: (1) Mantissa-bit Sharing, which groups k quantized weights and lets them share the least significant mantissa bit, allowing us to further approach the minimum quantization bit-width without accuracy loss; and (2) Adaptive Searching, which employs an offline optimization strategy to minimize the accuracy degradation introduced by sharing. Moreover, AMS-Quant is prototyped as efficient CUDA Linear kernels, which translate memory savings into wall-clock latency reduction by reducing memory access. Extensive experiments on large-scale datasets and models show that AMS-Quant can quantize models to FP5.33-e2m3 and FP4.25-e2m2 and significantly speed up LLM decoding over FP16 inference (by 2.8x and 3.2x, respectively), with negligible accuracy loss.
Problem

Research questions and friction points this paper is trying to address.

Reducing memory footprint and data movement in LLM inference
Achieving non-integer bit-widths for floating-point quantization
Minimizing accuracy loss while accelerating decoding speed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mantissa-bit sharing reduces quantization bit-width
Adaptive searching minimizes accuracy degradation
CUDA kernels translate memory savings to speedup
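To make the sharing idea concrete, here is a minimal packing/unpacking sketch. The storage layout and the majority-vote rule for the shared bit are assumptions for illustration: the paper's Adaptive Searching chooses the shared bits via offline optimization, and its actual kernels pack bits for GPU memory access, not as Python lists.

```python
def pack_group(codes):
    """Pack a group of integer codes that will share their least significant
    mantissa bit. The shared bit is set by majority vote over the group
    (a stand-in for the paper's offline Adaptive Searching); each weight
    keeps only its private high bits."""
    k = len(codes)
    shared = 1 if 2 * sum(c & 1 for c in codes) > k else 0
    privates = [c >> 1 for c in codes]  # drop each code's own LSB
    return privates, shared

def unpack_group(privates, shared):
    """Reconstruct codes by re-attaching the shared bit to every weight."""
    return [(p << 1) | shared for p in privates]

# Three 6-bit e2m3-style codes (1 sign + 2 exponent + 3 mantissa bits):
codes = [0b101101, 0b100110, 0b011011]
privates, shared = pack_group(codes)       # shared bit = 1 (majority of LSBs)
restored = unpack_group(privates, shared)  # only the middle code's LSB flips
```

Only codes whose original LSB disagrees with the group's shared bit change on round-trip; that per-group rounding error is exactly what an offline search over sharing choices would aim to minimize.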
👥 Authors
Mengtao Lv (Huawei Inc.)
Ruiqi Zhu (King's College London)
Xinyu Wang (Huawei Inc.)
Yun Li (Huawei Inc.)