🤖 AI Summary
To address the high memory overhead of the KV cache and the accuracy loss of existing compression schemes in long-context LLM inference, this paper proposes ClusterKV, a recallable KV cache compression method based on semantic clustering. Unlike position-based paging or irreversible token eviction, ClusterKV manages the cache at the granularity of semantic clusters, combining efficient clustering, dynamic selection, lightweight indexing, and hierarchical cache management so that semantically relevant tokens can be recalled during decoding. Evaluated on 32k-context workloads, the method incurs negligible accuracy degradation under tight KV cache budgets of only 1k–2k entries, while reducing inference latency by up to 2× and improving decoding throughput by up to 2.5×, outperforming existing recallable compression approaches.
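To make the clustering step concrete, here is a minimal sketch of grouping cached key vectors into semantic clusters and building the centroid index. It assumes a plain Euclidean k-means over the keys; the paper's actual clustering metric and online algorithm may differ, and `kmeans_keys` and all names here are illustrative, not ClusterKV's API.

```python
import torch

def kmeans_keys(keys, n_clusters, n_iters=10):
    """Naive k-means over cached key vectors: returns cluster centroids
    (usable as a lightweight index) and a per-token cluster label."""
    # Initialize centroids from randomly chosen keys.
    centroids = keys[torch.randperm(keys.size(0))[:n_clusters]].clone()
    for _ in range(n_iters):
        # Assign every key to its nearest centroid.
        labels = torch.cdist(keys, centroids).argmin(dim=1)
        # Move each centroid to the mean of its member keys.
        for c in range(n_clusters):
            members = keys[labels == c]
            if members.size(0) > 0:
                centroids[c] = members.mean(dim=0)
    return centroids, labels

# Toy usage: group a 32k-token key cache into 256 clusters.
torch.manual_seed(0)
keys = torch.randn(32768, 128)          # [n_tokens, head_dim]
centroids, labels = kmeans_keys(keys, n_clusters=256)
print(centroids.shape, labels.shape)    # torch.Size([256, 128]) torch.Size([32768])
```

The key design point is that the index stores only one centroid per cluster rather than per-token metadata, which keeps the retrieval structure small relative to the full cache.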
📝 Abstract
Large Language Models (LLMs) have been widely deployed in a variety of applications, and context lengths are rapidly increasing to handle tasks such as long-document QA and complex logical reasoning. However, long contexts pose significant challenges for inference efficiency, including the high memory cost of the key-value (KV) cache and increased latency due to extensive memory accesses. Recent works have proposed compressing the KV cache to approximate the full attention computation, but these methods either evict tokens permanently, never recalling them for later inference, or recall previous tokens at the granularity of pages divided by textual position. Both approaches degrade model accuracy and output quality. To achieve efficient and accurate recallable KV cache compression, we introduce ClusterKV, which recalls tokens at the granularity of semantic clusters. We design and implement efficient algorithms and systems for clustering, selection, indexing and caching. Experiment results show that ClusterKV attains negligible accuracy loss across various tasks with 32k context lengths, using only a 1k to 2k KV cache budget, and achieves up to a 2× speedup in latency and a 2.5× improvement in decoding throughput. Compared to SoTA recallable KV compression methods, ClusterKV demonstrates higher model accuracy and output quality, while maintaining or exceeding inference efficiency.
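As a companion to the clustering sketch above, here is a hedged sketch of the recall step: the decoding query is scored against cluster centroids, and whole clusters are recalled in score order until the KV budget (e.g., the 1k–2k entries cited above) is filled. `select_kv_by_cluster`, the dot-product scoring, and the truncation policy are assumptions for illustration, not the paper's implementation.

```python
import torch

def select_kv_by_cluster(query, keys, values, centroids, labels, budget):
    """Recall whole clusters ranked by query-centroid similarity until
    the token budget is filled; return the selected keys and values."""
    scores = centroids @ query                        # [n_clusters]
    picked = []
    for c in scores.argsort(descending=True):
        # Gather the token indices belonging to this cluster.
        picked.append((labels == c).nonzero(as_tuple=True)[0])
        if sum(t.numel() for t in picked) >= budget:
            break
    sel = torch.cat(picked)[:budget]                  # truncate the last cluster
    return keys[sel], values[sel]

# Toy usage: a 32k-token cache, 256 clusters, 1k budget per decode step.
torch.manual_seed(0)
n, d, k = 32768, 128, 256
keys, values = torch.randn(n, d), torch.randn(n, d)
centroids = torch.randn(k, d)                         # from the clustering step
labels = torch.randint(0, k, (n,))                    # per-token cluster labels
q = torch.randn(d)                                    # current decoding query
k_sel, v_sel = select_kv_by_cluster(q, keys, values, centroids, labels, budget=1024)
print(k_sel.shape, v_sel.shape)  # torch.Size([1024, 128]) torch.Size([1024, 128])
```

Because selection happens at cluster rather than token or page granularity, each step scores only `n_clusters` centroids instead of all cached tokens, which is what makes recall cheap enough to run at every decoding step.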