ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference

📅 2024-10-28
🏛️ arXiv.org
📈 Citations: 14
Influential: 0
🤖 AI Summary
To address the high KV cache memory overhead and limited throughput of long-context LLM inference, this paper proposes “ShadowKV,” an inference system that reduces GPU memory usage by keeping a low-rank compression of the key cache on the GPU and offloading the value cache to the CPU. An accurate dynamic sparse selection mechanism reconstructs only the needed sparse KV pairs on the GPU at each decoding step, avoiding the latency that naive CPU offloading would add. By combining low-rank key compression, value-cache offloading, and precise sparse reconstruction, ShadowKV achieves up to 3.04× higher throughput and supports up to 6× larger batch sizes on an A100 GPU with no degradation in generation quality; notably, it even surpasses an ideal baseline that assumes infinite GPU memory.
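The key idea behind the memory savings is that the key cache is approximately low-rank, so it can be stored as two small factors and rebuilt on demand. The snippet below is a minimal sketch of that idea using a truncated SVD in PyTorch; the shapes, the rank, and the function names are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch (not the authors' code): keep a truncated SVD of the key
# cache instead of the full tensor, and rebuild only the rows that are needed.
import torch

def compress_keys(keys: torch.Tensor, rank: int = 64):
    """Factor a [seq_len, head_dim] key cache into two low-rank matrices."""
    U, S, Vh = torch.linalg.svd(keys, full_matrices=False)
    A = U[:, :rank] * S[:rank]        # [seq_len, rank]
    B = Vh[:rank, :]                  # [rank, head_dim]
    return A, B                       # store these instead of `keys`

def reconstruct_keys(A: torch.Tensor, B: torch.Tensor, idx: torch.Tensor):
    """Rebuild only the selected key rows on the fly."""
    return A[idx] @ B                 # [len(idx), head_dim]

keys = torch.randn(32_000, 128)       # toy single-head key cache
A, B = compress_keys(keys)
sparse_keys = reconstruct_keys(A, B, torch.arange(0, 32_000, 500))
```

The factorization can be computed once during pre-filling, so the decode-time cost reduces to the small matrix product that rebuilds the selected keys.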

📝 Abstract
With the widespread deployment of long-context large language models (LLMs), there has been a growing demand for efficient support of high-throughput inference. However, as the key-value (KV) cache expands with the sequence length, the increasing memory footprint and the need to access it for each token generation both result in low throughput when serving long-context LLMs. While various dynamic sparse attention methods have been proposed to speed up inference while maintaining generation quality, they either fail to sufficiently reduce GPU memory consumption or introduce significant decoding latency by offloading the KV cache to the CPU. We present ShadowKV, a high-throughput long-context LLM inference system that stores the low-rank key cache and offloads the value cache to reduce the memory footprint for larger batch sizes and longer sequences. To minimize decoding latency, ShadowKV employs an accurate KV selection strategy that reconstructs minimal sparse KV pairs on-the-fly. By evaluating ShadowKV on a broad range of benchmarks, including RULER, LongBench, and Needle In A Haystack, and models like Llama-3.1-8B, Llama-3-8B-1M, GLM-4-9B-1M, Yi-9B-200K, Phi-3-Mini-128K, and Qwen2-7B-128K, we demonstrate that it can support up to 6× larger batch sizes and boost throughput by up to 3.04× on an A100 GPU without sacrificing accuracy, even surpassing the performance achievable with infinite batch size under the assumption of infinite GPU memory. The code is available at https://github.com/bytedance/ShadowKV.
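The abstract's "accurate KV selection strategy that reconstructs minimal sparse KV pairs on-the-fly" can be pictured as chunk-level scoring: split the context into small chunks, summarize each chunk with a landmark key, and at each decoding step keep only the best-scoring chunks. The sketch below shows that selection step under assumed shapes; the landmark definition, chunk size, and sparse budget are illustrative, and the paper's exact scoring rule may differ.

```python
# Minimal sketch of chunk-level sparse KV selection; the chunk size and scoring
# rule are assumptions, not ShadowKV's exact kernel.
import torch

def select_chunks(query: torch.Tensor, chunk_landmarks: torch.Tensor, budget: int):
    """Score each chunk by its landmark's similarity to the current query
    and keep the `budget` highest-scoring chunks for sparse attention."""
    scores = chunk_landmarks @ query                 # [num_chunks]
    return torch.topk(scores, k=budget).indices      # chunk ids to reconstruct

# Toy example: one landmark (e.g. the mean key) per 8-token chunk of a 128K context.
query = torch.randn(128)
landmarks = torch.randn(16_000, 128)
chunk_ids = select_chunks(query, landmarks, budget=256)
```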
Problem

Research questions and friction points this paper is trying to address.

How to reduce the growing KV cache memory footprint of long-context LLMs (a back-of-envelope sketch follows this list)
How to minimize the decoding latency that offloading the KV cache to the CPU introduces
How to boost serving throughput without sacrificing generation accuracy
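For scale, a back-of-envelope calculation (not from the paper) shows why the full KV cache alone can exceed a single A100's 80 GB at long contexts and modest batch sizes; the model configuration matches a Llama-3.1-8B-style setting, while the batch size and context length are illustrative.

```python
# Rough KV cache size for a Llama-3.1-8B-style config: 32 layers, 8 grouped-query
# KV heads, head dim 128, fp16 keys and values. Batch and length are illustrative.
layers, kv_heads, head_dim = 32, 8, 128
seq_len, batch_size, bytes_per_elem = 122_880, 8, 2

kv_bytes = 2 * layers * kv_heads * head_dim * seq_len * batch_size * bytes_per_elem
print(f"KV cache: {kv_bytes / 2**30:.1f} GiB")   # 120.0 GiB, well above one 80 GB A100
```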
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stores a low-rank key cache on the GPU for memory efficiency
Offloads the value cache to the CPU to shrink the GPU memory footprint (a minimal offloading sketch follows this list)
Employs accurate sparse KV selection to keep decoding latency low
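One way to picture the value-cache side is a thin wrapper around pinned host memory that streams back only the chunks the selection step picked. The class below is a hedged sketch under assumed shapes, chunk size, and method names, not ShadowKV's actual interface.

```python
# Sketch of a CPU-offloaded value cache; chunk size, dtype, and method names
# are illustrative assumptions.
import torch

class OffloadedValueCache:
    def __init__(self, seq_len: int, head_dim: int, chunk_size: int = 8):
        self.chunk_size = chunk_size
        # Pinned host memory enables fast, asynchronous host-to-device copies.
        self.values = torch.empty(seq_len, head_dim, dtype=torch.float16,
                                  pin_memory=torch.cuda.is_available())

    def append(self, pos: int, v: torch.Tensor):
        """Write one token's value vector into host memory during decoding."""
        self.values[pos].copy_(v)

    def fetch(self, chunk_ids: torch.Tensor, device: str = "cuda"):
        """Copy only the selected value chunks back to the GPU."""
        idx = (chunk_ids[:, None] * self.chunk_size
               + torch.arange(self.chunk_size)).flatten()
        return self.values[idx].to(device, non_blocking=True)
```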