🤖 AI Summary
Matrix multiplication in large language models faces a memory bandwidth bottleneck.
Method: This paper proposes a rate-distortion quantization framework for the matrix product $A^\top B$, rather than quantizing individual matrices. It establishes, for the first time, a non-asymptotic lower bound on quantization error for matrix products; designs a nested-lattice-based universal quantizer yielding explicit Frobenius-norm error guarantees for arbitrary inputs; and characterizes the rate-distortion function for Gaussian matrix products, revealing a phase transition at $R \approx 0.906$ bits per entry.
Contribution/Results: The framework achieves the information-theoretic lower bound and is asymptotically optimal for Gaussian inputs. A low-complexity practical variant closely approaches this optimum. This work provides the first quantization paradigm for matrix multiplication with rigorous statistical guarantees and precise phase-transition characterization, enabling principled accuracy–efficiency trade-offs.
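The setup above — compress $A$ and $B$ independently at $R$ bits per entry, then estimate $A^\top B$ from the two descriptions — can be illustrated with a much simpler baseline than the paper's nested-lattice construction: a uniform dithered scalar quantizer (the one-dimensional $\mathbb{Z}$ lattice). The clipping range, rate, and matrix sizes below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, R = 4096, 64, 4            # inner dimension, columns, rate (bits/entry)

A = rng.standard_normal((n, d))
B = rng.standard_normal((n, d))

def quantize(M, R, rng):
    """Uniform dithered scalar quantizer at R bits/entry.

    Entries are clipped to [-c, c] and rounded to one of 2**R levels
    after adding a random dither; subtracting the same dither at the
    decoder (subtractive dithering) makes the quantization error
    uniform and independent of the input. In a real codec the dither
    would come from a pseudorandom seed shared by encoder and decoder.
    """
    c = 4.0                      # clipping range chosen for N(0,1) entries
    step = 2 * c / (2 ** R)      # quantizer step size
    dither = rng.uniform(-step / 2, step / 2, size=M.shape)
    levels = 2 ** R
    idx = np.clip(np.round((M + dither) / step), -levels // 2, levels // 2 - 1)
    return idx * step - dither   # decoder output

A_hat = quantize(A, R, rng)
B_hat = quantize(B, R, rng)

# Each matrix is encoded with no knowledge of the other; only the
# decoder combines the two descriptions into a product estimate.
err = np.linalg.norm(A_hat.T @ B_hat - A.T @ B) / np.linalg.norm(A.T @ B)
print(f"relative Frobenius error of product estimate: {err:.3f}")
```

This baseline already gives a nontrivial product estimate at 4 bits/entry; the point of the paper's nested-lattice quantizer is to close the gap to the information-theoretic lower bound that such naive scalar schemes leave open.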
📝 Abstract
Recent work in the machine learning community has proposed multiple methods for performing lossy compression (quantization) of large matrices. This quantization is important for accelerating matrix multiplication (a main component of large language models), which is often bottlenecked by the speed of loading these matrices from memory. Unlike classical vector quantization and rate-distortion theory, the goal of these new compression algorithms is to approximate not the matrices themselves, but their matrix product. Specifically, given a pair of real matrices $A,B$, an encoder (compressor) is applied to each of them independently, producing descriptions with $R$ bits per entry. These representations are subsequently used by the decoder to estimate the matrix product $A^\top B$. In this work, we provide a non-asymptotic lower bound on the mean squared error of this approximation (as a function of the rate $R$) for the case of matrices $A,B$ with iid Gaussian entries. Algorithmically, we construct a universal quantizer based on nested lattices with an explicit guarantee of approximation error for any (non-random) pair of matrices $A$, $B$ in terms of only the Frobenius norms $\|\bar{A}\|_F$, $\|\bar{B}\|_F$ and $\|\bar{A}^\top \bar{B}\|_F$, where $\bar{A},\bar{B}$ are versions of $A,B$ with zero-centered columns, respectively. For iid Gaussian matrices our quantizer achieves the lower bound and is thus asymptotically optimal. A practical low-complexity version of our quantizer achieves performance quite close to optimal. In addition, we derive the rate-distortion function for matrix multiplication of iid Gaussian matrices, which exhibits an interesting phase transition at $R \approx 0.906$ bits/entry.
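The guarantee in the abstract is stated in terms of the column-centered matrices $\bar{A},\bar{B}$. One reason centering is harmless is a simple identity: writing $A = \bar{A} + \mathbf{1}\mu_A^\top$ with $\bar{A}^\top\mathbf{1} = 0$ (and likewise for $B$), the cross terms in $A^\top B$ vanish, so the full product is recovered from $\bar{A}^\top\bar{B}$ by a rank-one correction involving only the column means. The sketch below checks this identity numerically; the shapes and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, a, b = 1000, 8, 5
A = rng.standard_normal((n, a)) + 2.0   # shift so the means are nonzero
B = rng.standard_normal((n, b)) - 1.0

# Zero-center each column: A = Abar + 1 * mu_A^T, B = Bbar + 1 * mu_B^T.
mu_A = A.mean(axis=0)                   # column means, shape (a,)
mu_B = B.mean(axis=0)                   # column means, shape (b,)
Abar = A - mu_A                         # broadcasting centers each column
Bbar = B - mu_B

# Since Abar^T 1 = 0 and 1^T Bbar = 0, the cross terms vanish:
#   A^T B = Abar^T Bbar + n * mu_A mu_B^T.
# Only Abar, Bbar need lossy quantization; the rank-one correction is a
# handful of exact scalars per column, cheap to transmit separately.
recovered = Abar.T @ Bbar + n * np.outer(mu_A, mu_B)
assert np.allclose(recovered, A.T @ B)
```

This is why the error guarantee can depend on $\|\bar{A}\|_F$, $\|\bar{B}\|_F$, and $\|\bar{A}^\top\bar{B}\|_F$ alone: the uncentered part of the product is reconstructed exactly.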