🤖 AI Summary
To address GPU memory and bandwidth bottlenecks in long-context LLM inference, this paper proposes RetroInfer, a system that reconceptualizes the KV cache as a vector storage system and exploits attention sparsity. Methodologically, it introduces the wave index, an Attention-aWare VEctor index, together with a cooperative wave buffer: the index combines tripartite attention approximation, accuracy-bounded attention estimation, and segmented clustering to retrieve critical tokens efficiently and accurately, while the buffer coordinates KV cache placement across GPU and CPU and overlaps computation with data transfer to sustain throughput. On long-context benchmarks, RetroInfer achieves up to 4.5× speedup over full attention within GPU memory limits and, when the KV cache is extended to CPU memory, up to 10.5× speedup over sparse attention baselines, all while preserving full-attention-level accuracy.
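To make the retrieval idea concrete, here is a minimal, hypothetical sketch (not RetroInfer's actual algorithm or API) of treating cached keys as a vector store: cluster the keys, score each cluster against the query via its centroid as a crude stand-in for attention estimation, then run exact attention only over tokens in the top-scoring clusters. All names, sizes, and the single global clustering pass are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_tokens, n_clusters = 64, 4096, 32

# Cached keys/values for all past tokens, plus the current query vector.
keys = rng.standard_normal((n_tokens, d)).astype(np.float32)
values = rng.standard_normal((n_tokens, d)).astype(np.float32)
query = rng.standard_normal(d).astype(np.float32)

# Crude clustering: pick random keys as centroids and assign each key to its
# nearest centroid by dot product (a real index would refine these clusters,
# and the paper's segmented clustering works per token segment).
centroids = keys[rng.choice(n_tokens, n_clusters, replace=False)]
assign = np.argmax(keys @ centroids.T, axis=1)

# Score clusters by centroid-query similarity and keep only the top few,
# standing in for the paper's accuracy-bounded attention estimation.
cluster_scores = centroids @ query
top_clusters = np.argsort(cluster_scores)[-4:]

# Exact softmax attention over just the retrieved subset of tokens.
mask = np.isin(assign, top_clusters)
sel_keys, sel_values = keys[mask], values[mask]
logits = sel_keys @ query / np.sqrt(d)
weights = np.exp(logits - logits.max())
weights /= weights.sum()
output = weights @ sel_values

print(output.shape, int(mask.sum()), "of", n_tokens, "tokens attended")
```

The point of the sketch is the shape of the computation: attention cost scales with the retrieved subset rather than the full context, which is what makes spilling the rest of the KV cache to CPU memory viable.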
📝 Abstract
The growing context lengths of large language models (LLMs) pose significant challenges for efficient inference, primarily due to GPU memory and bandwidth constraints. We present RetroInfer, a novel system that reconceptualizes the key-value (KV) cache as a vector storage system which exploits the inherent attention sparsity to accelerate long-context LLM inference. At its core is the wave index, an Attention-aWare VEctor index that enables efficient and accurate retrieval of critical tokens through techniques such as tripartite attention approximation, accuracy-bounded attention estimation, and segmented clustering. Complementing this is the wave buffer, which coordinates KV cache placement and overlaps computation and data transfer across GPU and CPU to sustain high throughput. Unlike prior sparsity-based methods that struggle with token selection and hardware coordination, RetroInfer delivers robust performance without compromising model accuracy. Experiments on long-context benchmarks show up to 4.5X speedup over full attention within GPU memory limits and up to 10.5X over sparse attention baselines when KV cache is extended to CPU memory, all while preserving full-attention-level accuracy.
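The wave buffer's overlap of computation and data transfer can be sketched with a simple producer-consumer pipeline. Everything below is a hypothetical illustration of the overlap pattern only: `fetch_segment` stands in for a CPU-to-GPU copy of staged KV entries and `attend` for attention over them; neither name nor the double-buffer depth comes from the paper.

```python
import queue
import threading
import time

def fetch_segment(i):
    """Stand-in for staging one KV segment from host (CPU) memory."""
    time.sleep(0.01)
    return f"segment-{i}"

def attend(seg):
    """Stand-in for attention compute over one retrieved segment."""
    time.sleep(0.01)
    return f"attended({seg})"

def pipeline(n_segments):
    # Bounded queue acts as a double buffer: at most one segment is in
    # flight while the current one is being consumed by compute.
    staged = queue.Queue(maxsize=1)

    def prefetcher():
        for i in range(n_segments):
            staged.put(fetch_segment(i))
        staged.put(None)  # sentinel: no more segments

    t = threading.Thread(target=prefetcher)
    t.start()
    results = []
    while (seg := staged.get()) is not None:
        # Compute on this segment overlaps with the prefetch of the next one.
        results.append(attend(seg))
    t.join()
    return results

results = pipeline(4)
print(results)
```

With transfer and compute each taking a fixed unit of time, this pipeline hides most of the transfer latency behind compute, which is the property the wave buffer relies on to keep a CPU-extended KV cache from stalling the GPU.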