Efficient Long-Context LLM Inference via KV Cache Clustering

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address KV cache memory growth in long-context inference, this paper proposes an online chunked soft-matching clustering method: the sequence is partitioned into chunks, key-value pairs within each chunk are matched by similarity, and each resulting cluster is merged into a single centroid, compressing the KV states. The alternating intra-chunk partitioning strategy comes with a theoretical analysis of its computational complexity and optimality. The method requires no model fine-tuning and is fully compatible with existing transformer architectures. Experiments demonstrate that, while preserving near-lossless performance on long-context tasks (average accuracy drop <0.3%), it reduces KV cache memory usage by up to 80%, accelerates decoding by up to 3.19×, and reduces end-to-end latency by up to 2.72×. The approach balances computational efficiency, compression fidelity, and deployment practicality.

📝 Abstract
Large language models (LLMs) with extended context windows have become increasingly prevalent for tackling complex tasks. However, the substantial Key-Value (KV) cache required for long-context LLMs poses significant deployment challenges. Existing approaches either discard potentially critical information needed for future generations or offer limited efficiency gains due to high computational overhead. In this paper, we introduce Chelsea, a simple yet effective framework for online KV cache clustering. Our approach is based on the observation that key states exhibit high similarity along the sequence dimension. To enable efficient clustering, we divide the sequence into chunks and propose Chunked Soft Matching, which employs an alternating partition strategy within each chunk and identifies clusters based on similarity. Chelsea then merges the KV cache within each cluster into a single centroid. Additionally, we provide a theoretical analysis of the computational complexity and the optimality of the intra-chunk partitioning strategy. Extensive experiments across various models and long-context benchmarks demonstrate that Chelsea achieves up to 80% reduction in KV cache memory usage while maintaining comparable model performance. Moreover, with minimal computational overhead, Chelsea accelerates the decoding stage of inference by up to 3.19× and reduces end-to-end latency by up to 2.72×.
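The chunk-wise clustering described in the abstract can be sketched as follows. This is a minimal illustration, not Chelsea's actual algorithm: the alternating even/odd partition, the cosine-similarity threshold, and the pairwise-mean centroid are all assumptions made for the sketch, and the function name and parameters are hypothetical.

```python
import numpy as np

def chunked_soft_matching(keys, values, chunk_size=8, sim_threshold=0.9):
    """Illustrative sketch: per-chunk bipartite soft matching over KV pairs.

    Each chunk is split into two alternating partitions (even vs. odd
    positions). Each key in set A is greedily matched to its most similar
    key in set B; pairs above the threshold are merged into one centroid
    (the mean of the matched key/value states). Unmatched entries are kept.
    """
    merged_k, merged_v = [], []
    for start in range(0, len(keys), chunk_size):
        k = keys[start:start + chunk_size]
        v = values[start:start + chunk_size]
        a = list(range(0, len(k), 2))   # alternating partition: set A
        b = list(range(1, len(k), 2))   # alternating partition: set B
        if not b:                       # chunk too small to match
            merged_k.extend(k)
            merged_v.extend(v)
            continue
        # Cosine similarity between the two partitions (|A| x |B|).
        ka = k[a] / np.linalg.norm(k[a], axis=1, keepdims=True)
        kb = k[b] / np.linalg.norm(k[b], axis=1, keepdims=True)
        sim = ka @ kb.T
        used_b, matched = set(), {}
        for i in range(len(a)):         # greedy one-to-one matching
            j = int(np.argmax(sim[i]))
            if sim[i, j] >= sim_threshold and j not in used_b:
                used_b.add(j)
                matched[i] = j
        for i, ai in enumerate(a):
            if i in matched:
                bi = b[matched[i]]
                merged_k.append((k[ai] + k[bi]) / 2)  # cluster centroid
                merged_v.append((v[ai] + v[bi]) / 2)
            else:
                merged_k.append(k[ai])
                merged_v.append(v[ai])
        for j, bi in enumerate(b):      # keep unmatched B entries
            if j not in used_b:
                merged_k.append(k[bi])
                merged_v.append(v[bi])
    return np.stack(merged_k), np.stack(merged_v)
```

Because matching and merging happen independently within each chunk, the per-step cost stays linear in sequence length, which is consistent with the paper's emphasis on low clustering overhead during online inference.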
Problem

Research questions and friction points this paper is trying to address.

Reducing KV cache memory usage in long-context LLMs
Improving inference speed with minimal computational overhead
Maintaining model performance while compressing the cache
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online KV cache clustering for efficiency
Chunked Soft Matching for similarity-based clustering
Merges KV cache into centroids to reduce memory
Jie Hu
Peking University
Shengnan Wang
Huawei Technologies
Yutong He
Peking University
Ping Gong
University of Science and Technology of China
Jiawei Yi
University of Science and Technology of China
Juncheng Zhang
University of Science and Technology of China
Youhui Bai
Huawei Technologies
Renhai Chen
Tianjin University
Gong Zhang
Huawei Technologies
Cheng Li
University of Science and Technology of China
Kun Yuan
Peking University