🤖 AI Summary
Post-training quantization (PTQ) of large language models (LLMs) suffers significant accuracy degradation at ultra-low bit-widths, particularly when weights and activations are jointly quantized to 4 bits. To address this, the paper proposes NestQuant, the first PTQ framework to instantiate information-theoretically optimal nested lattice quantization through an efficient implementation based on the Gosset lattice. NestQuant models weights, activations, and the KV cache under a unified scheme and is compatible with every matrix multiplication in the model, including those in self-attention and MLP layers. Applied to Llama-3-8B, it achieves full 4-bit quantization with a WikiText-2 perplexity of 6.6, shrinking the gap to the FP16 baseline by over 55%, and it substantially mitigates quantization-induced degradation across multiple benchmarks. The core contribution is the first practical, low-complexity construction of nested lattices for LLM quantization, breaking the precision bottleneck of ultra-low-bit LLM PTQ.
📝 Abstract
Post-training quantization (PTQ) has emerged as a critical technique for efficient deployment of large language models (LLMs). This work proposes NestQuant, a novel PTQ scheme for weights and activations that is based on self-similar nested lattices. Recent work has mathematically shown such quantizers to be information-theoretically optimal for low-precision matrix multiplication. We implement a practical low-complexity version of NestQuant based on the Gosset lattice, making it a drop-in quantizer for any matrix multiplication step (e.g., in self-attention, MLP, etc.). For example, NestQuant quantizes the weights, KV-cache, and activations of Llama-3-8B to 4 bits, achieving a perplexity of 6.6 on WikiText-2. This represents a more than 55% reduction in the perplexity gap with respect to the unquantized model (perplexity 6.14) compared to Meta's state-of-the-art SpinQuant (perplexity 7.3). Comparisons on various LLM evaluation benchmarks also show a reduction in the performance degradation induced by quantization.
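The nested Gosset-lattice idea can be illustrated with a small sketch. The snippet below is an illustrative reconstruction, not the paper's code: the function names and the choice q = 16 are assumptions. It finds the nearest point of the Gosset lattice using the classic decoder for E8 = D8 ∪ (D8 + 1/2), then reduces that point modulo a scaled copy q·E8; this "nested" reduction is what yields a finite codebook of log2(q) bits per coordinate (4 bits when q = 16).

```python
import numpy as np

def nearest_D8(x):
    """Nearest point of D8 (integer vectors with even coordinate sum)."""
    f = np.round(x)
    if int(f.sum()) % 2 != 0:
        # Parity is odd: flip the coordinate with the largest rounding
        # error, in the direction of that error.
        k = int(np.argmax(np.abs(x - f)))
        f[k] += 1.0 if x[k] >= f[k] else -1.0
    return f

def nearest_E8(x):
    """Nearest point of the Gosset lattice E8 = D8 U (D8 + 1/2)."""
    a = nearest_D8(x)
    b = nearest_D8(x - 0.5) + 0.5
    return a if np.sum((x - a) ** 2) <= np.sum((x - b) ** 2) else b

def nested_encode(x, q=16):
    """Quantize x to E8, then reduce modulo the coarse lattice q*E8.

    The residual lives in one of q**8 cosets of q*E8 in E8, i.e.
    log2(q) bits per coordinate (4 bits for q = 16). Sketch only:
    the paper's scheme additionally handles scaling and overload.
    """
    y = nearest_E8(np.asarray(x, dtype=float))
    return y - q * nearest_E8(y / q)
```

Vectors of length 8 (weight or activation blocks) would be encoded independently with `nested_encode`; the low complexity of the E8 decoder is what makes the scheme practical as a drop-in quantizer.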