High-Rate Quantized Matrix Multiplication: Theory and Practice

📅 2026-01-23
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the trade-off between accuracy and efficiency in high-rate quantized matrix multiplication for large language models. By leveraging information-theoretic rate-distortion analysis, the authors derive fundamental limits for both joint weight-and-activation quantization and weight-only quantization. They propose WaterSIC, a scheme that dynamically allocates quantization bits based on the water-filling principle, achieving basis-independent near-optimal performance using only scalar integer quantizers. Theoretically, WaterSIC operates within just 0.25 bit per entry of the information-theoretic rate-distortion limit under high-rate conditions. Experiments demonstrate that GPTQ combined with random rotation on Llama-3-8B achieves performance within approximately 0.1 bit of WaterSIC, closely approaching the theoretical optimum.
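The summary's key mechanism is bit allocation by the water-filling principle. Below is a minimal sketch of the classical reverse water-filling rule, assuming per-coordinate variances (e.g., eigenvalues of $\Sigma_X$) and an average bit budget are given; the function name `reverse_waterfilling_bits` and the bisection tolerance are illustrative, and this is not the paper's WaterSIC implementation (which additionally builds on scalar INT quantizers).

```python
import numpy as np

def reverse_waterfilling_bits(variances, avg_bits, tol=1e-10):
    """Classical reverse water-filling bit allocation (illustrative sketch).

    Coordinate i with variance s2_i gets distortion D_i = min(level, s2_i)
    and rate R_i = 0.5 * log2(s2_i / D_i); coordinates whose variance falls
    below the water level receive zero bits. We bisect on the water level
    until the average rate meets the budget.
    """
    s2 = np.asarray(variances, dtype=float)

    def avg_rate(level):
        return 0.5 * np.log2(s2 / np.minimum(level, s2)).mean()

    lo, hi = 1e-12 * s2.max(), s2.max()  # avg_rate is decreasing in level
    while (hi - lo) > tol * s2.max():
        mid = 0.5 * (lo + hi)
        if avg_rate(mid) > avg_bits:
            lo = mid  # too much rate: raise the water level
        else:
            hi = mid
    level = 0.5 * (lo + hi)
    return 0.5 * np.log2(s2 / np.minimum(level, s2))

# Example: variances spanning three orders of magnitude, 4-bit average budget.
eigs = np.logspace(0, 3, num=8)
bits = reverse_waterfilling_bits(eigs, avg_bits=4.0)
print(np.round(bits, 2))  # larger-variance directions receive more bits
```

The equal allocation attributed to GPTQ in the abstract corresponds to giving every coordinate the same budget regardless of its variance, which is what the water-filling rule improves upon.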

📝 Abstract
This work investigates the problem of quantized matrix multiplication (MatMul), which has become crucial for the efficient deployment of large language models (LLMs). We consider two settings: 1) generic MatMul, where both matrices must be quantized (weight + activation quantization); and 2) weight-only quantization, where the second matrix is known only through the covariance matrix $\Sigma_X$ of its columns. For each setting, we first review the fundamental information-theoretic tradeoff between quantization rate and distortion (high-rate theory), and then analyze the performance of several popular quantization schemes against these fundamental limits. Specifically, we discuss the rate loss (relative to the information-theoretic optima) of absmax INT and floating-point (FP) quantization, for which we also derive remarkably accurate heuristic approximations. Weight-only quantization is related to the problem of weighted mean squared error (WMSE) source coding, whose classical (reverse) waterfilling solution dictates how rate should be distributed between coordinates of the vector. We show how waterfilling can be used to improve practical LLM quantization algorithms (GPTQ), which at present allocate rate equally. This new scheme (termed "WaterSIC") uses only scalar INT quantizers, yet its high-rate performance is basis-free (it depends only on the determinant of $\Sigma_X$ and is thus, unlike existing schemes, immune to random rotations) and is within a multiplicative factor of $\frac{2\pi e}{12}$ (or 0.25 bit/entry) of the information-theoretic distortion limit. GPTQ's performance is affected by the choice of basis, but for a random rotation and the actual $\Sigma_X$ from Llama-3-8B we find GPTQ to be within 0.1 bit (depending on the layer type) of WaterSIC, suggesting that GPTQ with random rotation is also near optimal for high-rate quantization.
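For intuition on the quoted 0.25 bit/entry figure (not spelled out in the abstract itself, but standard high-rate arithmetic): at high rate, distortion scales as $D \propto 2^{-2R}$, so a multiplicative distortion gap of $c$ costs $\tfrac{1}{2}\log_2 c$ bits per entry:

$$\Delta R = \frac{1}{2}\log_2\frac{2\pi e}{12} \approx \frac{1}{2}\log_2 1.42 \approx 0.25\ \text{bits/entry}.$$

The abstract also analyzes the rate loss of absmax INT quantization. Below is a minimal sketch of such a quantizer, assuming symmetric per-tensor scaling; the helper name and the INT4 default are illustrative, not the paper's exact setup.

```python
import numpy as np

def absmax_int_quantize(w, bits=4):
    """Symmetric absmax INT quantization (illustrative sketch).

    Scales by the largest magnitude so the integer grid covers
    [-absmax, absmax], rounds to the nearest level, then dequantizes.
    """
    levels = 2 ** (bits - 1) - 1            # e.g. 7 for symmetric INT4
    scale = np.abs(w).max() / levels
    q = np.clip(np.round(w / scale), -levels - 1, levels)
    return q * scale, q.astype(np.int8), scale

w = np.random.randn(4096)
w_hat, q, s = absmax_int_quantize(w, bits=4)
print(f"per-entry MSE at 4 bits: {np.mean((w - w_hat) ** 2):.4e}")
```

At high rate the measured per-entry MSE behaves like $\Delta^2/12$ for step size $\Delta$, and $\frac{2\pi e}{12}$ is the classical gap between entropy-coded uniform scalar quantization and the Shannon lower bound, matching the 0.25-bit conversion above.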
Problem

Research questions and friction points this paper is trying to address.

quantized matrix multiplication
large language models
weight-only quantization
rate-distortion tradeoff
high-rate quantization
Innovation

Methods, ideas, or system contributions that make the work stand out.

quantized matrix multiplication
high-rate quantization
waterfilling
weight-only quantization
information-theoretic limits