EVICPRESS: Joint KV-Cache Compression and Eviction for Efficient LLM Serving

📅 2025-12-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high latency caused by the growing KV-cache memory footprint in large language model (LLM) inference, this paper proposes a cross-tier storage co-optimization framework that integrates compression and eviction. Unlike prior works that optimize compression or eviction in isolation, the approach introduces a unified utility function that jointly models their coupled impact on generation latency and output quality. A context-aware adaptive decision policy preserves sensitive contexts with quality-aware compression while aggressively optimizing non-sensitive ones. Technically, the framework combines INT8/FP16 lossy compression, multi-level storage scheduling, dynamic utility modeling, periodic performance profiling, and a heuristic KV redistribution algorithm. Extensive experiments across 12 datasets and 5 mainstream LLMs demonstrate up to a 54.3% reduction in time-to-first-token (TTFT), a 2.19x speedup, along with significant end-to-end latency improvement, a substantially higher KV-cache hit rate, and no degradation in generation quality.


📝 Abstract
Reusing KV cache is essential for efficient Large Language Model (LLM) inference systems. As the number of LLM users grows, the KV-cache footprint can easily exceed GPU memory capacity, so prior work either evicts KV cache to lower-tier storage devices or compresses it so that more KV cache fits in fast memory. However, prior work misses an important opportunity: jointly optimizing eviction and compression decisions across all KV caches to minimize average generation latency without hurting quality. We propose EVICPRESS, a KV-cache management system that applies lossy compression and adaptive eviction to KV caches across multiple storage tiers. For each context's KV cache, EVICPRESS considers how compressing or evicting that cache affects the average generation quality and delay across all contexts as a whole. To do so, EVICPRESS introduces a unified utility function that quantifies the quality and delay impact of each lossy-compression or eviction choice. A profiling module periodically updates the utility scores of all possible eviction-compression configurations for all contexts, and a fast heuristic rearranges KV caches across storage tiers with the goal of maximizing the utility score on each tier. Compared to baselines that only evict or only compress KV cache, EVICPRESS achieves higher KV-cache hit rates on fast devices, and thus lower delay, while preserving high generation quality by applying conservative compression to contexts that are sensitive to compression errors. Evaluation on 12 datasets and 5 models demonstrates that EVICPRESS achieves up to 2.19x faster time-to-first-token (TTFT) at equivalent generation quality.
Problem

Research questions and friction points this paper is trying to address.

How to jointly optimize KV-cache eviction and compression decisions, which prior work treats in isolation
How to minimize average generation latency without degrading generation quality
How to manage KV caches efficiently across multiple storage tiers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint KV-cache compression and eviction optimization
Unified utility function for quality and delay trade-offs
Periodic profiling and heuristic placement across storage tiers
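The utility-driven placement described above can be sketched as a small greedy routine. This is purely illustrative: the `Config` fields, the `utility` scalarization (quality minus weighted delay), and the `place` heuristic are assumptions for the sketch, not the paper's actual algorithm or data model.

```python
from dataclasses import dataclass

@dataclass
class Config:
    """One candidate eviction-compression configuration for a context's KV cache.
    Field names and tier ordering (0 = GPU, 1 = CPU, 2 = disk) are assumed."""
    tier: int
    compression: str   # e.g. "none", "fp16", "int8"
    size: float        # KV-cache size under this compression (GB)
    quality: float     # profiled generation quality under this config
    delay: float       # profiled load + generation delay (s)

def utility(cfg: Config, alpha: float = 1.0) -> float:
    # Hypothetical scalarization: reward quality, penalize delay.
    return cfg.quality - alpha * cfg.delay

def place(contexts: dict, capacities: list, alpha: float = 1.0) -> dict:
    """Greedy placement: for each context, pick the highest-utility
    configuration whose tier still has free capacity, falling back to
    slower tiers (or stronger compression) when the fast tier is full."""
    remaining = list(capacities)   # free capacity per tier (GB)
    placement = {}
    for ctx_id, configs in contexts.items():
        best = None
        for cfg in sorted(configs, key=lambda c: utility(c, alpha), reverse=True):
            if cfg.size <= remaining[cfg.tier]:
                best = cfg
                break
        if best is None:
            raise RuntimeError(f"no tier can hold context {ctx_id}")
        remaining[best.tier] -= best.size
        placement[ctx_id] = best
    return placement
```

With two identical contexts and a GPU tier that only fits one, the first context keeps its uncompressed cache on the GPU while the second falls back to a compressed copy on the next tier, mirroring the quality-aware fallback the paper describes.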