🤖 AI Summary
To address the prohibitively large memory overhead of the KV cache (up to multiple gigabytes) in long-context inference for Transformer-based large language models (LLMs), this paper proposes KVComp, a dynamic, scalable, lightweight lossy compression framework. The framework is built around the data characteristics of the LLM KV cache, combining block-wise partitioning, quantization, and sparsification, and co-designs these compression steps with the attention computation kernels to significantly reduce data movement. Compared to existing methods, it achieves a memory reduction rate that is 47% higher on average, and up to 83% higher, with negligible accuracy degradation. Decompression is highly efficient: in some scenarios it even accelerates the attention matrix-vector product, outperforming cuBLAS-based attention kernels thanks to the reduced data movement.
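To make the mechanism concrete, here is a minimal NumPy sketch of block-wise lossy compression in the spirit described above: each KV cache block is optionally sparsified by zeroing its smallest-magnitude entries, then quantized to int8 with one scale per block. The function names, the symmetric int8 scheme, and the magnitude-threshold heuristic are illustrative assumptions, not KVComp's actual algorithm.

```python
import numpy as np

def compress_block(block: np.ndarray, sparsity: float = 0.0):
    """Lossy-compress one KV cache block (e.g., a [block_len, head_dim] slice):
    optional magnitude-based sparsification, then symmetric int8 quantization."""
    block = block.astype(np.float32, copy=True)
    if sparsity > 0.0:
        # Zero out roughly the `sparsity` fraction of smallest-magnitude entries.
        k = int(block.size * sparsity)
        if k > 0:
            thresh = np.partition(np.abs(block).ravel(), k - 1)[k - 1]
            block[np.abs(block) <= thresh] = 0.0
    # One scale per block maps the block's max magnitude onto the int8 range.
    scale = float(np.abs(block).max()) / 127.0
    if scale == 0.0:
        scale = 1.0  # all-zero block
    quantized = np.clip(np.round(block / scale), -127, 127).astype(np.int8)
    return quantized, scale

def decompress_block(quantized: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct an approximate fp32 block; per-entry error is bounded by scale/2."""
    return quantized.astype(np.float32) * scale
```

Even this simple int8-plus-scale representation halves fp16 storage on its own; sparsified blocks can additionally be stored in a compact index/value form for further savings.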
📝 Abstract
Transformer-based large language models (LLMs) demonstrate impressive potential in various practical applications. However, long-context inference poses a significant challenge due to the enormous memory requirements of the key-value (KV) cache, which can scale to multiple gigabytes as sequence length and batch size increase. In this paper, we present KVComp, a generic and efficient KV cache management framework optimized for long-text generation that works synergistically with both latency-critical and throughput-critical inference systems. KVComp employs novel lossy compression techniques specifically designed for the data characteristics of the KV cache, featuring careful co-design of compression algorithms and system architecture. Our approach maintains compatibility with the growing nature of the KV cache while preserving high computational efficiency. Experimental results show that KVComp achieves a 47% higher memory reduction rate on average, and up to 83% higher, than existing methods, with little to no model accuracy degradation. Furthermore, KVComp achieves extremely high execution throughput, effectively reducing decompression overhead and, in some cases, even accelerating the matrix-vector multiplication operation and outperforming cuBLAS-based attention kernels thanks to less data movement.
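For intuition on why the KV cache reaches this scale, the back-of-envelope calculation below estimates its size for a hypothetical Llama-2-7B-like shape (32 layers, 32 KV heads, head dimension 128, fp16 storage); the configuration and context lengths are illustrative assumptions, not figures from the paper.

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, batch: int, bytes_per_elem: int = 2) -> int:
    """Total KV cache size: the factor of 2 covers keys and values;
    bytes_per_elem=2 assumes fp16/bf16 storage."""
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Hypothetical Llama-2-7B-like shape at a 32k-token context:
print(kv_cache_bytes(32, 32, 128, 32_768, batch=1) / 2**30)  # 16.0 GiB
print(kv_cache_bytes(32, 32, 128, 32_768, batch=8) / 2**30)  # 128.0 GiB
```

At long contexts and larger batches, the cache can rival or exceed the model weights themselves in memory footprint, which is what makes lossy KV cache compression attractive.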