EMS: Adaptive Evict-then-Merge Strategy for Head-wise KV Cache Compression Based on Global-Local Importance

📅 2024-12-11
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high memory overhead of KV caches in long-context inference for large language models (LLMs), and to overcome two limitations of existing compression methods, namely biased distributions of important tokens and neglect of inter-head sparsity, this paper proposes EMS, a head-wise adaptive KV cache compression framework. The method introduces: (1) a fused Global-Local importance scoring mechanism; (2) an adaptive Evict-then-Merge compression framework that explicitly models sparsity and redundancy across heads; and (3) a zero-class mechanism enabling head-wise parallel compression. On LongBench, EMS improves average scores by over 1.28 points across four mainstream LLMs under a 256-token cache budget and consistently achieves the lowest perplexity among compared methods. On the Needle-in-a-Haystack retrieval task, it preserves 95% accuracy while caching less than 2% of the full context length.

📝 Abstract
As large language models (LLMs) continue to advance, the demand for higher quality and faster processing of long contexts across various applications is growing. The KV cache is widely adopted as it stores previously generated key and value tokens, effectively reducing redundant computation during inference. However, as its memory overhead becomes a significant concern, efficient compression of the KV cache has gained increasing attention. Most existing methods approach compression from two perspectives: identifying important tokens and designing compression strategies. However, these approaches often produce biased distributions of important tokens due to the influence of accumulated attention scores or positional encoding. Furthermore, they overlook the sparsity and redundancy across different heads, which makes it difficult to preserve the most effective information at the head level. To this end, we propose EMS to overcome these limitations while achieving better KV cache compression under extreme compression ratios. Specifically, we introduce a Global-Local score that combines accumulated attention scores from both global and local KV tokens to better identify token importance. For the compression strategy, we design an adaptive and unified Evict-then-Merge framework that accounts for the sparsity and redundancy of KV tokens across different heads. Additionally, we implement head-wise parallel compression through a zero-class mechanism to enhance efficiency. Extensive experiments demonstrate SOTA performance even under extreme compression ratios. EMS consistently achieves the lowest perplexity, improves scores by over 1.28 points across four LLMs on LongBench under a 256 cache budget, and preserves 95% retrieval accuracy with a cache budget of less than 2% of the context length in the Needle-in-a-Haystack task.
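The Global-Local score described in the abstract fuses attention accumulated over all queries (global) with attention accumulated over a recent window of queries (local) to reduce positional bias. The paper's exact formulation is not given here, so the sketch below is illustrative only: the window size `local_window`, the fusion weight `alpha`, and the normalization step are assumptions, not the paper's definitions.

```python
import numpy as np

def global_local_score(attn, local_window=32, alpha=0.5):
    """Toy Global-Local importance score for one attention head.

    attn: (num_queries, num_keys) attention weights (rows sum to 1).
    Returns a per-key importance score.
    """
    # Global component: attention accumulated over ALL query positions,
    # which tends to favor early tokens (the bias the paper notes).
    global_score = attn.sum(axis=0)
    # Local component: attention accumulated only from the most recent
    # queries, capturing what the current context actually attends to.
    local_score = attn[-local_window:].sum(axis=0)
    # Normalize each component so the two scales are comparable,
    # then fuse with a convex combination (alpha is an assumption).
    g = global_score / (global_score.sum() + 1e-8)
    l = local_score / (local_score.sum() + 1e-8)
    return alpha * g + (1 - alpha) * l

# Demo on random softmax-normalized attention.
rng = np.random.default_rng(0)
logits = rng.normal(size=(64, 64))
attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
scores = global_local_score(attn)
```

Tokens with the lowest fused score would be candidates for eviction or merging under a fixed cache budget.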
Problem

Research questions and friction points this paper is trying to address.

Efficient KV cache compression
Overcoming biased token distribution
Preserving head-level information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Global-Local importance score
Evict-then-Merge framework
Head-wise parallel compression
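The Evict-then-Merge idea above can be sketched for a single head: keep the highest-scoring tokens under the cache budget, then fold each evicted token into its most similar kept token instead of discarding it outright. This is a minimal illustrative sketch, not the paper's algorithm; the dot-product similarity, running-average merge, and per-slot weights are all assumptions.

```python
import numpy as np

def evict_then_merge(keys, values, scores, budget):
    """Illustrative per-head Evict-then-Merge step.

    keys, values: (num_tokens, head_dim) cached K/V for one head.
    scores: (num_tokens,) importance scores (e.g. Global-Local).
    budget: number of KV slots to retain.
    """
    keep = np.argsort(scores)[-budget:]          # highest-scoring tokens survive
    evict = np.setdiff1d(np.arange(len(scores)), keep)
    k_out, v_out = keys[keep].copy(), values[keep].copy()
    members = np.ones(budget)  # how many tokens each slot represents
    for i in evict:
        # Merge the evicted token into the most similar retained slot
        # (dot-product similarity is an assumption, not the paper's metric).
        j = int(np.argmax(k_out @ keys[i]))
        # Running weighted average keeps each slot the mean of its members.
        w = members[j]
        k_out[j] = (w * k_out[j] + keys[i]) / (w + 1)
        v_out[j] = (w * v_out[j] + values[i]) / (w + 1)
        members[j] += 1
    return k_out, v_out
```

Because each head gets its own keep/merge decisions, budgets can differ per head; the paper's zero-class mechanism is what lets these per-head steps run in parallel despite that.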