DBudgetKV: Dynamic Budget in KV Cache Compression for Ensuring Optimal Performance

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing KV cache compression methods rely on predefined, fixed budgets, making them inflexible for variable-length inputs and diverse tasks and hindering open-domain deployment. This paper introduces a dynamic-budget KV compression paradigm that enforces "zero performance degradation relative to the full cache" as a hard constraint while adaptively maximizing the pruning rate. Its core contributions are an attention-based online importance scoring mechanism and an adaptive termination criterion, enabling lightweight, budget-free dynamic pruning at inference time. The method is model- and task-agnostic, compatible across LLM scales and application scenarios. Evaluated on mainstream LLMs, it achieves an average 25.3% KV cache compression with no accuracy loss and reduces end-to-end inference latency.

📝 Abstract
To alleviate the memory burden during inference of large language models (LLMs), numerous studies have focused on compressing the KV cache by exploiting properties such as attention sparsity. However, these techniques often require a pre-defined cache budget; since the optimal budget varies with input length and task type, this limits their practical deployment in systems that accept open-domain instructions. To address this limitation, we propose a new KV cache compression objective: always ensure full-cache performance regardless of the specific input, while pruning the KV cache as aggressively as possible. To achieve this goal, we introduce a novel KV cache compression method dubbed DBudgetKV, which features an attention-based metric that signals when the remaining KV cache is unlikely to match full-cache performance, at which point pruning halts. Empirical evaluation spanning diverse context lengths, task types, and model sizes suggests that our method achieves lossless KV pruning effectively and robustly, exceeding a 25% compression ratio on average. Furthermore, our method is easy to integrate into LLM inference, not only saving memory but also reducing inference time compared to existing methods.
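The abstract describes the core loop: score each cached KV entry by attention, prune from the least important upward, and halt once a metric suggests the remaining cache can no longer match full-cache performance. The paper's exact scoring and termination criterion are not reproduced here; the following is a minimal sketch assuming attention-mass scoring and a hypothetical retained-mass threshold `tau` standing in for the paper's halting metric.

```python
import numpy as np

def dbudget_prune(attn, tau=0.95):
    """Sketch of dynamic-budget KV pruning with an adaptive halting criterion.

    attn : (num_queries, num_keys) attention weights over cached KV entries.
    tau  : hypothetical threshold on retained attention mass; a stand-in for
           the paper's actual full-cache-performance signal.

    Returns the sorted indices of KV entries to keep. Entries are pruned
    greedily from least- to most-attended until removing one more would
    drop the retained attention mass below tau.
    """
    # Importance score per KV entry: total attention mass it receives.
    scores = attn.sum(axis=0)
    total = scores.sum()
    order = np.argsort(scores, kind="stable")  # least important first
    removed_mass = 0.0
    n_pruned = 0
    for idx in order:
        # Halting criterion: stop before retained mass falls below tau.
        if (total - (removed_mass + scores[idx])) / total < tau:
            break
        removed_mass += scores[idx]
        n_pruned += 1
    return np.sort(order[n_pruned:])
```

Note that, unlike fixed-budget methods, the number of pruned entries here is decided per input: inputs whose attention mass is concentrated on few tokens are pruned more aggressively, while diffuse-attention inputs keep most of the cache.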
Problem

Research questions and friction points this paper is trying to address.

Dynamic KV cache compression
Optimal performance guarantee
Adaptive pruning mechanism
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic KV cache compression
Attention-based pruning metric
Lossless KV pruning integration