XQuant: Breaking the Memory Wall for LLM Inference with KV Cache Rematerialization

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Memory bandwidth bottlenecks severely limit GPU throughput during large language model (LLM) inference. To address this, we propose XQuant and XQuant-CL. XQuant applies low-bit quantization to the layer input activations X, caches them in place of the standard KV cache, and rematerializes the Keys and Values on-the-fly during inference, attaining up to 7.7× memory reduction while preserving near-FP16 accuracy (perplexity increases by less than 0.1). XQuant-CL additionally exploits the similarity of the X embeddings across layers, achieving up to 10× memory savings with only 0.01 perplexity degradation, and 12.5× savings with 0.1 degradation. Both methods require no architectural modifications or retraining, enabling seamless deployment. They significantly improve the memory–computation trade-off, offering an efficient hardware–software co-design solution for high-throughput, low-latency LLM inference.

📝 Abstract
Although LLM inference has emerged as a critical workload for many downstream applications, efficiently inferring LLMs is challenging due to the substantial memory footprint and bandwidth requirements. In parallel, compute capabilities have steadily outpaced both memory capacity and bandwidth over the last few decades, a trend that remains evident in modern GPU hardware and exacerbates the challenge of LLM inference. As such, new algorithms are emerging that trade increased computation for reduced memory operations. To that end, we present XQuant, which takes advantage of this trend, enabling an order-of-magnitude reduction in memory consumption through low-bit quantization with substantial accuracy benefits relative to state-of-the-art KV cache quantization methods. We accomplish this by quantizing and caching the layer input activations X, instead of using standard KV caching, and then rematerializing the Keys and Values on-the-fly during inference. This results in an immediate 2× memory savings compared to KV caching. By applying XQuant, we achieve up to ~7.7× memory savings with <0.1 perplexity degradation compared to the FP16 baseline. Furthermore, our approach leverages the fact that X values are similar across layers. Building on this observation, we introduce XQuant-CL, which exploits the cross-layer similarity in the X embeddings for extreme compression. Across different models, XQuant-CL attains up to 10× memory savings relative to the FP16 baseline with only 0.01 perplexity degradation, and 12.5× memory savings with only 0.1 perplexity degradation. XQuant exploits the rapidly increasing compute capabilities of hardware platforms to eliminate the memory bottleneck, while surpassing state-of-the-art KV cache quantization methods and achieving near-FP16 accuracy across a wide range of models.
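The core trade described above (cache one quantized X tensor per token instead of the two K and V tensors, then rematerialize K and V through the attention projections at decode time) can be sketched as follows. The symmetric quantizer, shapes, and random projection weights here are illustrative stand-ins, not the paper's exact scheme:

```python
import numpy as np

def quantize(x, bits=4):
    # Symmetric per-tensor quantization to signed integers (illustrative).
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
d = 64                                                 # model dim (toy size)
W_K = rng.standard_normal((d, d)).astype(np.float32)   # key projection
W_V = rng.standard_normal((d, d)).astype(np.float32)   # value projection
x = rng.standard_normal((16, d)).astype(np.float32)    # 16 cached tokens

# Standard KV caching stores both K and V (two tensors per token) ...
K, V = x @ W_K, x @ W_V

# ... whereas XQuant caches only the quantized X (one tensor per token,
# the immediate 2x saving) and rematerializes K and V on-the-fly.
q, scale = quantize(x, bits=4)
x_hat = dequantize(q, scale)
K_hat, V_hat = x_hat @ W_K, x_hat @ W_V

rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
print(f"relative K error from 4-bit X cache: {rel_err:.3f}")
```

The extra matrix multiplies per decode step are the "increased computation for reduced memory operations" the abstract refers to: on bandwidth-bound hardware, recomputing K and V is cheaper than streaming them from memory.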
Problem

Research questions and friction points this paper is trying to address.

Reducing memory consumption for LLM inference via KV cache rematerialization
Overcoming memory bandwidth limitations with low-bit quantization techniques
Leveraging cross-layer similarity for extreme compression without accuracy loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Caches low-bit quantized input activations X in place of the KV cache
Rematerializes Keys and Values on-the-fly
Exploits cross-layer similarity for compression
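The cross-layer contribution can be illustrated with a simple delta-coding sketch: because consecutive layers' X embeddings are similar, the inter-layer residual is small and survives aggressive quantization far better than the embedding itself. This toy example uses synthetic data and a hypothetical 2-bit quantizer; it is not the paper's exact algorithm:

```python
import numpy as np

def quantize(x, bits):
    # Symmetric per-tensor quantization (illustrative).
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    return np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
d = 64
x_layer0 = rng.standard_normal((16, d)).astype(np.float32)
# Hypothetical next-layer embedding: close to the previous layer's X
# (the cross-layer similarity that XQuant-CL exploits).
x_layer1 = x_layer0 + 0.05 * rng.standard_normal((16, d)).astype(np.float32)

# Delta coding: quantize the small inter-layer residual at very low
# bit-width, instead of quantizing each layer's X independently.
delta = x_layer1 - x_layer0
q_delta, s_delta = quantize(delta, bits=2)
x1_hat = x_layer0 + dequantize(q_delta, s_delta)
err_delta = np.linalg.norm(x_layer1 - x1_hat) / np.linalg.norm(x_layer1)

# Baseline for comparison: direct 2-bit quantization of the embedding.
q_direct, s_direct = quantize(x_layer1, bits=2)
err_direct = (np.linalg.norm(x_layer1 - dequantize(q_direct, s_direct))
              / np.linalg.norm(x_layer1))
print(f"2-bit delta error {err_delta:.4f} vs direct 2-bit error {err_direct:.4f}")
```

The residual's dynamic range is a fraction of the embedding's, so the same bit budget yields a much finer quantization grid; this is how the method pushes past the ~7.7× savings of per-layer X caching toward the reported 10–12.5× regime.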