G-KV: Decoding-Time KV Cache Eviction with Global Attention

📅 2025-11-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing KV cache compression methods for long-sequence reasoning rely solely on local attention scores and overlook token-level long-term importance, leading to performance degradation. To address this, we propose a global-attention-aware dynamic KV cache eviction mechanism that integrates current local attention scores with historically accumulated attention scores into a holistic token-importance metric. We further optimize the eviction policy via reinforcement learning and improve generation robustness under compression through knowledge-distillation-based post-training. Experiments demonstrate that the approach significantly reduces memory footprint and computational overhead while preserving generation quality across diverse long-context tasks, including document summarization and code completion. This work establishes a scalable, attention-informed cache-management paradigm for deploying large language models with extended context windows.

📝 Abstract
Recent reasoning large language models (LLMs) excel at complex tasks but face significant computational and memory challenges due to long sequence lengths. KV cache compression has emerged as an effective approach to greatly enhance reasoning efficiency. However, existing methods often focus on prompt compression or token eviction based on local attention scores, overlooking the long-term importance of tokens. We propose G-KV, a KV cache eviction method that employs a global scoring mechanism, combining local and historical attention scores to more accurately assess token importance. Additionally, we introduce post-training techniques, including reinforcement learning and distillation, to optimize models for compressed KV cache settings. The code for this paper is available at: https://github.com/microsoft/G-KV.
Problem

Research questions and friction points this paper is trying to address.

Reduces KV cache memory in long-sequence LLMs
Improves token importance assessment via global attention
Optimizes models for efficient KV cache compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Global scoring mechanism for KV cache eviction
Combines local and historical attention scores
Uses reinforcement learning and distillation post-training
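The global scoring idea above can be sketched in a few lines. This is a minimal illustration, not the paper's actual algorithm: the function names, the exponential-decay accumulation of historical attention, and the mixing weight `decay` are all assumptions for clarity.

```python
import numpy as np

def update_global_scores(hist_scores, local_scores, decay=0.9):
    """Blend historically accumulated attention with the current step's
    local attention into one global importance score per cached token.
    The decay factor (illustrative, not from the paper) down-weights
    older attention so recent evidence still matters."""
    return decay * hist_scores + local_scores

def evict_to_budget(hist_scores, budget):
    """Keep the `budget` tokens with the highest global scores.
    Returns the sorted indices of retained cache entries."""
    keep = np.argsort(hist_scores)[-budget:]
    return np.sort(keep)

# Toy usage: token 0 receives strong attention at every decoding step,
# so a global (accumulated) score retains it even if a single local
# snapshot might not.
hist = np.zeros(6)
for _ in range(5):
    local = np.array([0.5, 0.1, 0.1, 0.1, 0.1, 0.1])
    hist = update_global_scores(hist, local)
kept = evict_to_budget(hist, budget=3)
```

In contrast, an eviction policy using only the latest `local` vector would score tokens 1-5 identically to their history-blind snapshot; accumulating scores is what lets consistently attended tokens survive compression.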