RocketKV: Accelerating Long-Context LLM Inference via Two-Stage KV Cache Compression

📅 2025-02-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the memory bandwidth and capacity bottlenecks induced by KV caching in long-context LLM inference, RocketKV proposes a training-free, two-stage KV cache compression method. In the first stage, SnapKV++, an enhanced variant of SnapKV, performs coarse-grain token eviction with adaptive pooling and full grouped-query attention (GQA) compatibility. In the second stage, a hybrid attention method approximates attention scores via reductions along both the head and sequence dimensions, then applies top-k sparse attention for fine-grain sparsification. By decoupling coarse-grain eviction from fine-grain selection, RocketKV stays fully compatible with mainstream GQA-based models without any fine-tuning. Experiments on an NVIDIA H100 GPU show up to 3× end-to-end decode speedup, up to 31% lower peak GPU memory usage, and negligible accuracy degradation on long-context benchmarks.
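The first stage can be illustrated with a minimal SnapKV-style eviction sketch. This is a toy version under stated assumptions: it handles a single head, uses a fixed pooling width (RocketKV's SnapKV++ chooses the pool size adaptively, which is not reproduced here), and the function name and signature are hypothetical, not the paper's API.

```python
import numpy as np

def coarse_evict(keys, values, window_scores, budget, pool_size=7):
    """SnapKV-style coarse-grain KV eviction (illustrative sketch).

    keys, values    : (seq_len, head_dim) cached tensors for one head
    window_scores   : (seq_len,) attention mass that a recent observation
                      window assigns to each prompt token
    budget          : number of tokens to keep
    pool_size       : 1-D pooling width used to smooth the scores
                      (SnapKV++ picks this adaptively; fixed here)
    """
    seq_len = keys.shape[0]
    # Max-pool the per-token scores so that neighbours of an important
    # token are also likely to be retained.
    pooled = np.array([
        window_scores[max(0, i - pool_size // 2): i + pool_size // 2 + 1].max()
        for i in range(seq_len)
    ])
    # Keep the top-`budget` positions, preserving their original order.
    keep = np.sort(np.argsort(pooled)[-budget:])
    return keys[keep], values[keep], keep
```

Downstream decoding then attends only over the retained `budget` tokens, which is where the bandwidth and capacity savings come from.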

📝 Abstract
Transformer-based Large Language Models rely critically on KV cache to efficiently handle extended contexts during the decode phase. Yet, the size of the KV cache grows proportionally with the input length, burdening both memory bandwidth and capacity as decoding progresses. To address this challenge, we present RocketKV, a training-free KV cache compression strategy designed specifically to reduce both memory bandwidth and capacity demand of KV cache during the decode phase. RocketKV contains two consecutive stages. In the first stage, it performs coarse-grain KV cache eviction on the input sequence tokens with SnapKV++, a method improved upon SnapKV by introducing adaptive pooling size and full compatibility with grouped-query attention. In the second stage, it adopts a hybrid attention method to conduct fine-grain top-k sparse attention, approximating the attention scores by leveraging both head and sequence dimensional reductions. Combining these two stages, RocketKV achieves significant KV cache fetching bandwidth and storage savings while maintaining comparable accuracy to full KV cache attention. We show that RocketKV provides end-to-end speedup by up to 3× as well as peak memory reduction by up to 31% in the decode phase on an NVIDIA H100 GPU compared to the full KV cache baseline, while achieving negligible accuracy loss on a variety of long-context tasks.
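The second stage, fine-grain top-k sparse attention, can be sketched as follows. This toy version approximates scores by simply slicing off the first few key channels; RocketKV's hybrid attention uses reductions along both the head and sequence dimensions, which this sketch does not reproduce, and all names here are illustrative.

```python
import numpy as np

def topk_sparse_attention(q, K, V, k=32, approx_dims=16):
    """Fine-grain top-k sparse attention (illustrative sketch).

    1. Score all keys cheaply using a reduced representation
       (here: the first `approx_dims` channels only).
    2. Select the top-k keys under the approximate scores.
    3. Run exact softmax attention over just those k keys.

    q : (head_dim,)   single query vector
    K, V : (seq_len, head_dim) cached keys and values
    """
    d = q.shape[-1]
    # Step 1: cheap approximate scores from a low-dimensional slice.
    approx = K[:, :approx_dims] @ q[:approx_dims]
    # Step 2: indices of the k most promising keys.
    idx = np.argsort(approx)[-k:]
    # Step 3: exact scaled-dot-product attention restricted to them.
    scores = K[idx] @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V[idx]
```

With `k` equal to the full sequence length the result matches dense attention exactly; smaller `k` trades a small score-approximation error for fetching only `k` KV entries per decode step.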
Problem

Research questions and friction points this paper is trying to address.

Reduce KV cache memory demand
Accelerate LLM decode-phase inference
Maintain accuracy in long-context tasks
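The memory pressure behind these problems is easy to quantify: the KV cache stores one key and one value tensor per layer, so its size grows linearly with sequence length. A short sketch using illustrative defaults (a Llama-3-8B-like GQA configuration in fp16; these figures are assumptions, not taken from the paper):

```python
def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=8, head_dim=128,
                   bytes_per_elem=2):
    """Total KV cache size for one sequence.

    The factor 2 counts the key tensor and the value tensor
    stored per layer; defaults are an illustrative GQA config.
    """
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# At a 128K-token context this single sequence already needs 16 GiB,
# all of which must be re-fetched from HBM on every decode step.
gib = kv_cache_bytes(128 * 1024) / 2**30
print(gib)  # → 16.0
```

Evicting or skipping most of these entries, as RocketKV's two stages do, cuts both the per-step fetch traffic and the resident footprint.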
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage KV cache compression
Coarse-grain KV cache eviction
Fine-grain top-k sparse attention