Taming the Fragility of KV Cache Eviction in LLM Inference

📅 2025-10-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited robustness of KV cache eviction in large language model inference, which stems from reliance on a stability assumption and fragile mean-based importance aggregation, this paper proposes a defensive aggregation strategy. It replaces mean aggregation with a two-step, linear-time defensive aggregation mechanism and introduces a layer-wise dynamic cache budget allocation scheme. The approach improves the reliability of importance estimation in extreme scenarios without significant computational overhead. Experiments span seven task categories and eighteen datasets: at a 20% cache size, the resulting methods, DefensiveKV and its layer-wise extension Layer-DefensiveKV, reduce generation quality degradation by 2.3x and 4.3x, respectively, compared to the strongest baseline. These results set new performance benchmarks for KV cache optimization, improving both robustness and efficiency.

📝 Abstract
Large language models have revolutionized natural language processing, yet their deployment remains hampered by the substantial memory and runtime overhead of the transformer's Key-Value cache. To mitigate this, recent methods employ a scoring-aggregation framework to evict unimportant cache entries, based on the stability assumption: that a fixed subset of entries remains consistently important during generation. However, prior work has largely focused on refining importance indicators for scoring, while defaulting to mean aggregation out of faithful trust in the stability assumption. In this work, we argue that this underlying assumption is inherently fragile, making mean aggregation highly vulnerable in extreme cases. To counter this, we propose a simple yet elegant defensive aggregation strategy: a two-step, linear-time approach that controls worst-case risk, thereby defending against extreme cases with negligible computational overhead. Embodying this strategy, we propose a novel cache eviction method, DefensiveKV, and its extension, Layer-DefensiveKV, which incorporates layer-wise budget allocation. Across seven task domains (18 datasets), our methods reduce generation quality loss by 2.3x and 4.3x, respectively, versus the strongest baseline under a 20% cache size. These results set new performance benchmarks and pioneer a promising direction for optimizing cache eviction against underlying fragility through worst-case risk management. Our code is available at https://github.com/FFY0/DefensiveKV.
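The abstract does not spell out the aggregation details, but the core contrast can be sketched. In the scoring-aggregation framework, each cache entry receives an importance score at every observation step; mean aggregation averages these scores, so an entry whose importance spikes only occasionally can be washed out and evicted. The sketch below is a minimal illustration under assumptions: the `alpha` blend weight, the max-based worst-case statistic, and all function names are hypothetical stand-ins, not the paper's actual formulation.

```python
import numpy as np

def mean_aggregate(scores: np.ndarray) -> np.ndarray:
    # Conventional aggregation: average each entry's per-step
    # importance. A brief spike in importance is diluted, so the
    # entry may be evicted even though some query depends on it.
    return scores.mean(axis=0)

def defensive_aggregate(scores: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    # Hypothetical defensive aggregation (illustration only):
    # blend the mean with each entry's peak importance, so entries
    # with rare but large spikes are protected from eviction.
    # Both statistics take one linear pass over the scores.
    return alpha * scores.mean(axis=0) + (1.0 - alpha) * scores.max(axis=0)

def keep_topk(agg: np.ndarray, k: int) -> set:
    # Retain the k cache entries with the highest aggregated score.
    return set(np.argsort(agg)[-k:].tolist())

# Entry 0 is usually unimportant but spikes at the last step;
# mean aggregation evicts it, the defensive blend keeps it.
scores = np.array([
    [0.1, 0.5, 0.2],
    [0.1, 0.5, 0.2],
    [0.9, 0.5, 0.2],
])
```

With a budget of one entry, `mean_aggregate` keeps entry 1 while `defensive_aggregate` keeps the spiky entry 0; setting `alpha=1.0` recovers plain mean aggregation, which is why a blend can trade average-case accuracy for worst-case protection.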
Problem

Research questions and friction points this paper is trying to address.

Addressing the fragility of KV cache eviction in LLM inference
Challenging the stability assumption underlying cache-entry importance scoring
Proposing defensive aggregation to control worst-case risk
Innovation

Methods, ideas, or system contributions that make the work stand out.

Defensive aggregation strategy that controls worst-case risk
Two-step, linear-time approach that defends against extreme cases
Layer-wise dynamic budget allocation for cache eviction
Yuan Feng
School of Computer Science, University of Science and Technology of China
Haoyu Guo
Shanghai AI Lab
JunLin Lv
School of Computer Science, University of Science and Technology of China
S. Kevin Zhou
School of Biomedical Engineering, USTC
Xike Xie
School of Biomedical Engineering, USTC