🤖 AI Summary
This work addresses the information-theoretic bottleneck that scalar quantization imposes on large language model (LLM) compression, a limit that existing vector quantization approaches typically circumvent with explicit codebooks or costly lookup tables. The paper introduces, for the first time, the 24-dimensional Leech lattice into LLM compression, leveraging its optimal sphere-packing properties and its construction via the extended Golay code to enable codebook-free vector quantization. The proposed method supports efficient indexing, joint multi-shell angular search, and fully parallelizable dequantization. Experiments show that the approach outperforms state-of-the-art methods, including QuIP#, QTIP, and PVQ, setting a new state of the art in compression performance.
📝 Abstract
Scalar quantization of large language models (LLMs) is fundamentally limited by information-theoretic bounds. While vector quantization (VQ) overcomes these limits by encoding blocks of parameters jointly, practical implementations must avoid expensive lookup mechanisms and other explicit codebook storage. Lattice approaches address this through highly structured, dense packings. This paper explores the Leech lattice, which achieves the optimal sphere packing and kissing number in 24 dimensions, the highest dimension in which a lattice with such optimal properties is known. To make the Leech lattice usable for LLM quantization, we extend an existing search algorithm based on the extended Golay code construction to i) support indexing, enabling conversion to and from bitstrings without materializing the codebook, ii) allow angular search over a union of Leech lattice shells, and iii) provide a fully parallelizable dequantization kernel. Together, these yield a practical algorithm, Leech Lattice Vector Quantization (LLVQ). LLVQ delivers state-of-the-art LLM quantization performance, outperforming recent methods such as QuIP\#, QTIP, and PVQ. These results highlight the value of high-dimensional lattices for scalable, theoretically grounded model compression.
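The abstract's key ingredient is the extended Golay code underlying the Leech lattice construction. As a self-contained illustration (a sketch, not the paper's implementation), the snippet below builds the [24, 12, 8] extended Golay code by multiplying messages with the generator polynomial of the cyclic [23, 12, 7] Golay code and appending an overall parity bit, then checks the combinatorial properties that make the 24-dimensional construction work:

```python
def gf2_mul(a: int, b: int) -> int:
    """Carry-less (GF(2)) polynomial multiplication on bit-packed ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

# Generator polynomial of the cyclic [23, 12, 7] binary Golay code:
# g(x) = x^11 + x^9 + x^7 + x^6 + x^5 + x + 1  (bit i = coefficient of x^i).
G = 0b1010_1110_0011

weights = {}
for m in range(1 << 12):                      # all 2^12 messages
    c = gf2_mul(m, G)                         # length-23 Golay codeword
    c = (c << 1) | (bin(c).count("1") & 1)    # overall parity bit -> length 24
    w = bin(c).count("1")
    weights[w] = weights.get(w, 0) + 1

# The extended Golay code has minimum distance 8, and its 759 weight-8
# codewords are the octads of the Steiner system S(5, 8, 24) used in the
# standard Leech lattice construction.
assert min(w for w in weights if w > 0) == 8
assert weights[8] == 759
```

Enumerating all 4096 codewords is cheap here and makes the claimed distance properties directly checkable; the paper's search and indexing operate over the resulting lattice without ever materializing such a codebook.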