🤖 AI Summary
To address the redundant KV cache computation and the high loading latency from lower storage tiers (DRAM/SSD) caused by context reuse in LLM inference, this paper proposes the first dynamic, quality-controllable KV cache compression and placement framework. The method jointly optimizes compression selection (algorithm and rate) and storage tier assignment for each KV entry based on real-time, multi-dimensional features, including access frequency, temporal locality, and semantic importance, balancing DRAM hit rate, inference latency, and generation quality. Compared to static compression baselines, it achieves 1.43–2.4× lower latency at equivalent generation quality, or improves BLEU/ROUGE scores by 6%–55% under identical latency constraints. The core innovation is a fine-grained, adaptive, quality-aware KV cache management mechanism that tightly couples lossy compression with hierarchical storage placement.
📝 Abstract
Large language model (LLM) applications often reuse previously processed context, such as chat history and documents, which introduces significant redundant computation. Existing LLM serving systems avoid this recomputation by storing the KV caches of processed context and loading the corresponding KV cache when a new request reuses the context. As these LLM applications scale, however, the total size of KV caches grows beyond DRAM capacity and must span both DRAM and SSD.
However, prior work that stores KV caches in DRAM and SSD suffers from high loading delays, since most KV cache hits come from SSD, which is slow to load. To increase the KV cache hit rate in DRAM, we identify lossy KV cache compression as a promising approach. We design a lossy compression system that decides the compression algorithm, compression rate, and device placement for each KV cache entry to maximize DRAM hits and minimize loading delay without significantly degrading generation quality. Compared to various static compression baselines across three tasks, our system AdaptCache achieves 1.43–2.4× delay savings at the same quality and 6%–55% quality improvements at the same delay.
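To make the per-entry decision concrete, here is a minimal sketch of how a joint compression-and-placement policy could look. It is not AdaptCache's actual algorithm: the feature names (`access_freq`, `recency`, `importance`), the compression menu, the quality budget, and the hotness threshold are all illustrative assumptions, chosen only to show the shape of the decision the abstract describes.

```python
from dataclasses import dataclass

@dataclass
class KVEntry:
    size_bytes: int
    access_freq: float   # hypothetical feature: accesses per unit time
    recency: float       # hypothetical feature: 0..1, higher = more recent
    importance: float    # hypothetical feature: 0..1, semantic importance

# Hypothetical menu of (algorithm, compression rate, quality loss),
# ordered from least to most aggressive.
COMPRESSION_OPTIONS = [
    ("none", 1.0, 0.00),
    ("quantize-8bit", 2.0, 0.01),
    ("quantize-4bit", 4.0, 0.04),
    ("token-drop", 8.0, 0.10),
]

def choose_plan(entry, dram_budget_bytes, quality_budget):
    """Pick the most aggressive compression whose quality loss, weighted
    by the entry's importance, stays within a per-entry quality budget,
    then place the compressed entry in DRAM if it is hot enough and fits."""
    plan = COMPRESSION_OPTIONS[0]
    for algo, rate, loss in COMPRESSION_OPTIONS:
        if loss * entry.importance <= quality_budget:
            plan = (algo, rate, loss)  # options are ordered by rate
    algo, rate, _ = plan
    compressed = entry.size_bytes / rate
    # Frequently and recently accessed entries earn DRAM; the rest spill to SSD.
    hotness = entry.access_freq * (0.5 + 0.5 * entry.recency)
    tier = "DRAM" if compressed <= dram_budget_bytes and hotness > 1.0 else "SSD"
    return algo, tier, compressed
```

The point of the sketch is the coupling: a more important entry tolerates less lossy compression, which makes it larger, which in turn affects whether it fits in DRAM, so compression choice and tier assignment cannot be decided independently.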