EvolKV: Evolutionary KV Cache Compression for LLM Inference

📅 2025-09-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing KV cache compression methods predominantly rely on heuristic strategies, such as uniform cross-layer allocation or static eviction, that ignore inter-layer feature heterogeneity and its coupling with downstream task performance, leading to degraded generalization. This paper proposes EvolKV, a task-driven, layer-wise adaptive KV cache compression framework. It formulates cache allocation as a multi-objective optimization problem that jointly minimizes memory overhead and maximizes downstream task accuracy, and employs evolutionary search to learn per-layer cache budgets. Experiments across 11 long-context benchmarks demonstrate consistent gains over state-of-the-art baselines, including improvements of up to 7 percentage points over heuristic approaches on GSM8K. Notably, EvolKV surpasses the full-cache baseline on code completion while using only 1.5% of the original KV cache budget, suggesting substantial redundancy in uniformly allocated KV caches.

📝 Abstract
Existing key-value (KV) cache compression methods typically rely on heuristics, such as uniform cache allocation across layers or static eviction policies; however, these approaches ignore the critical interplay between layer-specific feature patterns and task performance, which can lead to degraded generalization. In this paper, we propose EvolKV, an adaptive framework for layer-wise, task-driven KV cache compression that jointly optimizes memory efficiency and task performance. By reformulating cache allocation as a multi-objective optimization problem, EvolKV leverages evolutionary search to dynamically configure layer budgets while directly maximizing downstream performance. Extensive experiments on 11 tasks demonstrate that our approach outperforms all baseline methods across a wide range of KV cache budgets on long-context tasks and surpasses heuristic baselines by up to 7 percentage points on GSM8K. Notably, EvolKV achieves superior performance over the full KV cache setting on code completion while utilizing only 1.5% of the original budget, suggesting untapped potential in learned compression strategies for KV cache budget allocation.
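The core idea of the abstract — searching over per-layer KV cache budgets with an evolutionary loop instead of allocating uniformly — can be sketched as a small toy. This is a minimal, hypothetical illustration, not the paper's actual algorithm: `fitness` stands in for downstream task evaluation (which in EvolKV would require running the model on a calibration task), and the mutation operator, population size, and elitism scheme are generic assumptions.

```python
import random

def evolutionary_budget_search(num_layers, total_budget, fitness,
                               pop_size=16, generations=60, seed=0):
    """Toy evolutionary search over per-layer KV cache budgets.

    Keeps the summed budget (roughly) fixed and maximizes a caller-supplied
    fitness function. Hypothetical sketch, not EvolKV's exact procedure.
    """
    rng = random.Random(seed)

    def random_alloc():
        # Random positive weights, normalized to the total token budget.
        w = [rng.random() + 1e-6 for _ in range(num_layers)]
        s = sum(w)
        return [max(1, round(total_budget * x / s)) for x in w]

    def mutate(alloc):
        # Move a small slice of budget from one layer to another,
        # conserving the overall budget.
        child = list(alloc)
        i, j = rng.sample(range(num_layers), 2)
        delta = max(1, child[i] // 8)
        if child[i] - delta >= 1:
            child[i] -= delta
            child[j] += delta
        return child

    pop = [random_alloc() for _ in range(pop_size)]
    for _ in range(generations):
        # Elitism: keep the top quarter, refill by mutating elite parents.
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: max(1, pop_size // 4)]
        pop = elite + [mutate(rng.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```

With a toy fitness that rewards spending budget on later layers, the search shifts tokens toward those layers while the total stays near the target budget; in the real setting the fitness would score the compressed model on held-out task examples.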
Problem

Research questions and friction points this paper is trying to address.

Optimizing KV cache compression for memory efficiency and task performance
Addressing heuristic limitations in layer-specific cache allocation
Dynamically configuring layer budgets via evolutionary search optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evolutionary search for dynamic layer cache allocation
Multi-objective optimization balancing memory and performance
Task-driven compression achieving superior results with minimal budget
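The "multi-objective" framing above means each candidate budget allocation is judged on two axes at once: downstream task score and memory footprint. A common way to make that searchable is to scalarize the objectives into a single fitness value; the weighting below is a hypothetical illustration, not the paper's exact formulation.

```python
def kv_fitness(task_score, cache_tokens, full_cache_tokens, mem_weight=0.1):
    """Scalarized two-objective fitness (hypothetical weighting):
    reward downstream task score, penalize the fraction of the full
    KV cache that a candidate allocation retains."""
    retained = cache_tokens / full_cache_tokens
    return task_score - mem_weight * retained
```

Under this scalarization, two allocations with equal task scores are ranked by memory: the one retaining fewer cache tokens wins, which is what drives the search toward aggressive compression levels such as the 1.5% budget reported above.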
Bohan Yu
School of Advanced Interdisciplinary Sciences, University of Chinese Academy of Sciences, Beijing, China
Yekun Chai
Baidu
natural language processing, machine learning