RetroInfer: A Vector-Storage Approach for Scalable Long-Context LLM Inference

📅 2025-05-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address GPU memory and bandwidth bottlenecks in long-context LLM inference, this paper proposes RetroInfer, a system that reconceptualizes the KV cache as a vector storage system. Methodologically, it introduces the wave index, an Attention-aWare VEctor index, and a cooperative wave buffer, built on tripartite attention approximation, accuracy-bounded attention estimation, segmented clustering, and CPU-GPU collaborative cache management. These techniques enable efficient retrieval of critical tokens and overlap computation with data transfer, while preserving full-attention accuracy. Experiments on long-context benchmarks show up to 4.5× speedup over full attention within GPU memory limits, and up to 10.5× speedup over sparse attention baselines when the KV cache is extended to CPU memory, all without accuracy loss.

📝 Abstract
The growing context lengths of large language models (LLMs) pose significant challenges for efficient inference, primarily due to GPU memory and bandwidth constraints. We present RetroInfer, a novel system that reconceptualizes the key-value (KV) cache as a vector storage system which exploits the inherent attention sparsity to accelerate long-context LLM inference. At its core is the wave index, an Attention-aWare VEctor index that enables efficient and accurate retrieval of critical tokens through techniques such as tripartite attention approximation, accuracy-bounded attention estimation, and segmented clustering. Complementing this is the wave buffer, which coordinates KV cache placement and overlaps computation and data transfer across GPU and CPU to sustain high throughput. Unlike prior sparsity-based methods that struggle with token selection and hardware coordination, RetroInfer delivers robust performance without compromising model accuracy. Experiments on long-context benchmarks show up to 4.5X speedup over full attention within GPU memory limits and up to 10.5X over sparse attention baselines when KV cache is extended to CPU memory, all while preserving full-attention-level accuracy.
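The abstract's core idea, retrieving only the critical tokens via a clustered vector index before running exact attention over them, can be illustrated with a toy sketch. This is not the paper's wave index: the function names, the plain k-means-style clustering, and the centroid-scoring heuristic here are illustrative stand-ins for RetroInfer's segmented clustering and accuracy-bounded estimation.

```python
import numpy as np

def build_segmented_index(keys, n_clusters=4, seed=0):
    """Cluster key vectors around centroids (a simplified stand-in for
    RetroInfer's segmented clustering)."""
    rng = np.random.default_rng(seed)
    centroids = keys[rng.choice(len(keys), n_clusters, replace=False)]
    for _ in range(10):  # a few k-means-style refinement iterations
        assign = np.argmax(keys @ centroids.T, axis=1)
        for c in range(n_clusters):
            members = keys[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return centroids, assign

def sparse_attention(query, keys, values, centroids, assign, top_c=2):
    """Approximate attention: score centroids against the query, keep only
    tokens in the top-scoring clusters, then run exact softmax attention
    over that subset."""
    cluster_scores = centroids @ query
    keep = np.argsort(cluster_scores)[-top_c:]
    mask = np.isin(assign, keep)
    if not mask.any():
        mask[:] = True  # fall back to full attention if no tokens selected
    logits = keys[mask] @ query / np.sqrt(keys.shape[1])
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ values[mask]
```

The point of the sketch is the cost structure: centroid scoring touches `n_clusters` vectors instead of all cached keys, and exact attention runs only over the retrieved subset, which is where attention sparsity pays off.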
Problem

Research questions and friction points this paper is trying to address.

Addresses GPU memory constraints in long-context LLM inference
Improves KV cache efficiency via attention sparsity exploitation
Enables high-throughput inference without accuracy loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vector-storage KV cache for attention sparsity
Wave index enables efficient token retrieval
Wave buffer optimizes GPU-CPU computation overlap
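The third bullet, overlapping attention computation with KV data movement, can be sketched with a one-worker prefetch pipeline. This is a hedged illustration, not the wave buffer itself: a thread pool stands in for asynchronous CPU-to-GPU transfer, and the segment layout is invented for the example.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def overlapped_attention(query, segments):
    """Toy wave-buffer-style pipeline: while attention runs over the current
    KV segment, the next segment is fetched in the background (standing in
    for an asynchronous CPU->GPU transfer)."""
    pool = ThreadPoolExecutor(max_workers=1)
    fetch = lambda seg: (np.asarray(seg[0]), np.asarray(seg[1]))  # mock transfer
    pending = pool.submit(fetch, segments[0])
    acc, norm = 0.0, 0.0
    for i in range(len(segments)):
        keys, values = pending.result()      # wait for the in-flight transfer
        if i + 1 < len(segments):
            pending = pool.submit(fetch, segments[i + 1])  # overlap next fetch
        logits = keys @ query / np.sqrt(keys.shape[1])
        w = np.exp(logits)                   # unnormalized softmax weights
        acc = acc + w @ values
        norm = norm + w.sum()
    pool.shutdown()
    return acc / norm                        # normalize across all segments
```

Because the softmax normalizer is accumulated across segments, the result equals exact attention over the concatenated cache; the overlap only hides transfer latency, it does not change the output.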
👥 Authors

Yaoqi Chen (Microsoft Research)
Jinkai Zhang (University of Science and Technology of China)
Baotong Lu (Microsoft Research)
Qianxi Zhang (Microsoft Research Asia)
Chengruidong Zhang (Microsoft)
Jingjia Luo (Tsinghua University)
Di Liu (Shanghai Jiao Tong University)
Huiqiang Jiang (Microsoft Research Asia)
Qi Chen
Jing Liu
Bailu Ding (Microsoft Research)
Xiao Yan (Wuhan University)
Jiawei Jiang (Wuhan University)
Chen Chen (Shanghai Jiao Tong University)
Mingxing Zhang (Tsinghua University)
Yuqing Yang
Fan Yang
Mao Yang