AhaKV: Adaptive Holistic Attention-Driven KV Cache Eviction for Efficient Inference of Large Language Models

📅 2025-06-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer from substantial KV cache memory overhead during inference. Existing eviction strategies—based on cumulative attention scores—exhibit a positional bias: their expected values decay with token position, leading to excessive retention of early tokens and suboptimal global context utilization. This work theoretically identifies and characterizes this bias for the first time, and proposes an adaptive holistic attention–driven KV cache eviction method. Our contributions are threefold: (1) modeling attention distribution uncertainty via information entropy to enable softmax temperature adaptation; (2) integrating value-vector semantic importance to construct an unbiased, globally aware eviction score; and (3) enabling dynamic KV cache pruning. Under fixed cache budgets, our approach significantly mitigates positional bias, ensuring globally uniform retention of salient tokens. Experiments across multiple benchmark tasks demonstrate state-of-the-art performance.

📝 Abstract
Large Language Models (LLMs) have significantly advanced the field of Artificial Intelligence. However, their deployment is resource-intensive, not only due to the large number of model parameters but also because the Key-Value (KV) cache consumes substantial memory during inference. While several works propose reducing the KV cache by evicting unnecessary tokens, these approaches rely on the accumulated attention score as an eviction score to quantify token importance. We identify that the accumulated attention score is biased: its mathematical expectation decreases with token position. As a result, the retained tokens concentrate at initial positions, limiting the model's access to global contextual information. To address this issue, we propose Adaptive holistic attention KV (AhaKV), which corrects the bias of the accumulated attention score by adaptively tuning the softmax scale according to the expected information entropy of the attention scores. To make use of the holistic attention information in the self-attention mechanism, AhaKV utilizes the information in the value vectors, which is overlooked in previous works, to refine the adaptive score. We show theoretically that our method is well suited for bias reduction. We deployed AhaKV on different models with a fixed cache budget. Experiments show that AhaKV successfully mitigates the bias, retains crucial tokens across the global context, and achieves state-of-the-art results against other related work on several benchmark tasks.
Problem

Research questions and friction points this paper is trying to address.

KV cache consumes excessive memory in LLM inference
Existing eviction methods bias towards initial token positions
Biased attention scores limit global context access
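
The positional bias named above can be seen with a content-free toy simulation: under causal attention, key position j only receives mass from queries i ≥ j, so earlier keys accumulate score over more steps regardless of what they contain. A minimal sketch (all values random; this illustrates the bias mechanism, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 64, 200
cum = np.zeros(n)  # accumulated attention score per key position
for _ in range(trials):
    for i in range(n):  # causal: query i attends over keys 0..i
        a = rng.random(i + 1)
        cum[: i + 1] += a / a.sum()  # random, content-free attention row
cum /= trials
# Key j's expected accumulated score is sum_{i>=j} 1/(i+1), strictly
# decreasing in j: early tokens win by position alone.
print(cum[0] > cum[n // 2] > cum[-1])  # True
```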
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptively tuning the softmax scale for bias reduction
Utilizing value vectors for holistic attention
State-of-the-art KV cache eviction efficiency
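
One plausible way to combine these ideas into an eviction score is sketched below. All names and the exact temperature rule are illustrative readings of the summary, not the authors' implementation: derive a softmax temperature from the mean attention entropy so that accumulated mass does not pile up on early positions, weight each key's mean attention by its value-vector norm, and keep the top-`budget` tokens.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def aha_style_keep(logits, values, budget):
    """Illustrative AhaKV-style token selection (hypothetical sketch).

    logits: (Q, K) raw attention logits for one head
    values: (K, d) value vectors
    """
    Q, K = logits.shape
    d = values.shape[1]
    probs = softmax(logits / np.sqrt(d))          # standard scaled attention
    H = -(probs * np.log(probs + 1e-9)).sum(-1)   # entropy per query row
    # Adapt the softmax scale from the mean entropy: flat (high-entropy)
    # attention yields a smaller temperature, sharpening the distribution.
    tau = np.sqrt(d) * (H.mean() / np.log(K))
    adapted = softmax(logits / tau)
    # Mean over queries (not sum) avoids rewarding early positions for
    # merely being attended by more queries; value-vector norms inject the
    # semantic importance the summary attributes to value vectors.
    score = adapted.mean(axis=0) * np.linalg.norm(values, axis=1)
    return np.argsort(score)[-budget:]            # indices of tokens to keep
```

A caller would run this per head once the cache exceeds its budget, dropping all KV entries whose indices are not returned.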
Yifeng Gu
South China University of Technology
Zicong Jiang
PhD student at Chalmers University of Technology
Communication Systems · Optical fiber communication and sensing · Machine learning · Generative AI
Jianxiu Jin
South China University of Technology
K. Guo
South China University of Technology
Ziyang Zhang
South China University of Technology
Xiangmin Xu
South China University of Technology