🤖 AI Summary
To address the KV cache memory explosion in long-context reasoning, this paper proposes Chelsea, an online chunked soft-matching clustering method: the sequence is partitioned into chunks, and key-value pairs within each chunk are softly matched by similarity and merged into cluster centroids, enabling efficient compression of KV states. The method employs an alternating intra-chunk partition strategy, accompanied by a theoretical analysis of its computational complexity and the optimality of the partitioning. It requires no model fine-tuning and is fully compatible with existing transformer architectures. Experiments demonstrate that, while preserving near-lossless performance on long-context tasks (average accuracy drop <0.3%), it reduces KV cache memory consumption by up to 80%, accelerates decoding by up to 3.19×, and decreases end-to-end latency by up to 2.72×. The approach strikes a practical balance among computational efficiency, compression fidelity, and ease of deployment.
📝 Abstract
Large language models (LLMs) with extended context windows have become increasingly prevalent for tackling complex tasks. However, the substantial Key-Value (KV) cache required for long-context LLMs poses significant deployment challenges. Existing approaches either discard potentially critical information needed for future generations or offer limited efficiency gains due to high computational overhead. In this paper, we introduce Chelsea, a simple yet effective framework for online KV cache clustering. Our approach is based on the observation that key states exhibit high similarity along the sequence dimension. To enable efficient clustering, we divide the sequence into chunks and propose Chunked Soft Matching, which employs an alternating partition strategy within each chunk and identifies clusters based on similarity. Chelsea then merges the KV cache within each cluster into a single centroid. Additionally, we provide a theoretical analysis of the computational complexity and the optimality of the intra-chunk partitioning strategy. Extensive experiments across various models and long-context benchmarks demonstrate that Chelsea achieves up to 80% reduction in KV cache memory usage while maintaining comparable model performance. Moreover, with minimal computational overhead, Chelsea accelerates the decoding stage of inference by up to 3.19$\times$ and reduces end-to-end latency by up to 2.72$\times$.
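The core idea — chunk the sequence, alternately partition each chunk, match keys across the two partitions by similarity, and merge each matched pair of KV entries into a centroid — can be illustrated with a minimal NumPy sketch. This is a hedged approximation, not the paper's exact algorithm: the function name `chunked_soft_matching_merge`, the even/odd alternating partition, the cosine-similarity matching, the `merge_ratio` parameter, and the simple averaging merge rule are all assumptions made for illustration.

```python
import numpy as np

def chunked_soft_matching_merge(keys, values, chunk_size=8, merge_ratio=0.5):
    """Sketch of chunked soft-matching KV clustering (illustrative only).

    keys, values: (n, d) arrays of per-token key/value states.
    Within each chunk, tokens are split into two alternating partitions;
    each token in partition A is matched to its most similar token in
    partition B, and the highest-similarity pairs are merged (averaged)
    into a single centroid, shrinking the cache.
    """
    merged_k, merged_v = [], []
    n = keys.shape[0]
    for start in range(0, n, chunk_size):
        k = keys[start:start + chunk_size]
        v = values[start:start + chunk_size]
        if k.shape[0] < 2:  # nothing to merge in a 1-token chunk
            merged_k.extend(k)
            merged_v.extend(v)
            continue
        # Alternating intra-chunk partition: even vs. odd positions.
        a_idx = np.arange(0, k.shape[0], 2)
        b_idx = np.arange(1, k.shape[0], 2)
        ka = k[a_idx] / np.linalg.norm(k[a_idx], axis=1, keepdims=True)
        kb = k[b_idx] / np.linalg.norm(k[b_idx], axis=1, keepdims=True)
        sim = ka @ kb.T                    # cosine similarity, A -> B
        best = sim.argmax(axis=1)          # most similar B-partner per A-token
        score = sim.max(axis=1)
        r = int(merge_ratio * a_idx.size)  # how many pairs to merge
        merge_a = np.argsort(-score)[:r]   # merge the most similar pairs first
        keep = np.ones(k.shape[0], dtype=bool)
        k_out, v_out = k.copy(), v.copy()
        for ai in merge_a:
            i, j = a_idx[ai], b_idx[best[ai]]
            # Merge the matched pair into a centroid stored at position j.
            k_out[j] = (k_out[i] + k_out[j]) / 2
            v_out[j] = (v_out[i] + v_out[j]) / 2
            keep[i] = False
        merged_k.extend(k_out[keep])
        merged_v.extend(v_out[keep])
    return np.array(merged_k), np.array(merged_v)
```

With `chunk_size=8` and `merge_ratio=0.5`, each chunk merges two pairs, so a 16-token cache shrinks to 12 entries; a real implementation would operate per attention head, run online as chunks fill during decoding, and likely weight the merged centroid rather than taking a plain average.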