A$^2$ATS: Retrieval-Based KV Cache Reduction via Windowed Rotary Position Embedding and Query-Aware Vector Quantization

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Long-context large language model (LLM) inference faces challenges of excessive KV cache memory consumption and high retrieval latency. To address these, this paper proposes a retrieval-based KV cache compression framework. First, it introduces windowed rotary position embedding (RoPE), a novel technique that decouples long-range positional dependencies. Second, it designs query-aware vector quantization (QAVQ), optimized to approximate attention scores for improved reconstruction fidelity. Third, it establishes a CPU-GPU heterogeneous offloading architecture integrated with top-K dynamic retrieval. The method substantially reduces KV cache footprint and memory access overhead while effectively mitigating performance degradation. Evaluated on mainstream long-context benchmarks, it achieves up to 2.7× throughput improvement with bounded accuracy loss.
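The windowed RoPE idea above decouples long-range positional dependency from the key states. As a minimal sketch (an assumption about the mechanism, not the paper's exact formulation), one simple reading is to clamp positions beyond a fixed window, so all distant keys share a single rotary phase and become position-independent:

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Standard rotary position embedding on the last dimension,
    using the half-split pairing convention."""
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)   # per-pair frequencies
    angle = pos * freqs
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate(
        [x1 * np.cos(angle) - x2 * np.sin(angle),
         x1 * np.sin(angle) + x2 * np.cos(angle)], axis=-1)

def windowed_rope(x, pos, window=64):
    """Hypothetical 'windowed' variant: positions beyond `window` are
    clamped, so every distant key receives the identical rotation.
    Position-independent distant keys are what allow a single,
    position-free codebook to approximate their attention scores."""
    return rope(x, min(pos, window))
```

Under this clamping, keys at positions 100 and 200 are rotated identically, so a quantizer fit once on the (uniformly rotated) key states stays valid at any distance.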

📝 Abstract
Long-context large language models (LLMs) pose significant challenges for efficient serving due to the large memory footprint and high access overhead of KV cache. Retrieval-based KV cache reduction methods can mitigate these challenges, typically by offloading the complete KV cache to CPU and retrieving necessary tokens on demand during inference. However, these methods still suffer from notable accuracy degradation and extra retrieval overhead. To address these limitations, this paper proposes A$^2$ATS, a novel retrieval-based KV cache reduction method. A$^2$ATS aims to obtain an accurate approximation of attention scores by applying the vector quantization technique to key states, thereby enabling efficient and precise retrieval of the top-K tokens. First, we propose Windowed Rotary Position Embedding, which decouples the positional dependency from query and key states after position embedding. Then, we propose query-aware vector quantization that optimizes the objective of attention score approximation directly. Finally, we design a heterogeneous inference architecture for KV cache offloading, enabling long context serving with larger batch sizes. Experimental results demonstrate that A$^2$ATS can achieve lower performance degradation with similar or lower overhead compared to existing methods, thereby increasing long context serving throughput by up to $2.7\times$.
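The retrieval step described in the abstract can be sketched in a few lines: quantize the offloaded key states against a small codebook, approximate each key's attention logit by its centroid's dot product with the query, and fetch only the top-K tokens from CPU memory. The k-means-style codebook below is a stand-in for illustration; the paper instead trains a query-aware quantizer against the attention score approximation objective.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_keys, n_codes, top_k = 64, 1024, 16, 32

# Key states that would live in the CPU-side, offloaded KV cache.
keys = rng.standard_normal((n_keys, d))

# Toy codebook: a few sampled keys used as centroids (a stand-in for
# the paper's query-aware quantizer).
codebook = keys[rng.choice(n_keys, n_codes, replace=False)]
dists = ((keys[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
assignments = dists.argmin(axis=1)            # code index per key

def retrieve_topk(query, k=top_k):
    """Score n_codes centroids instead of n_keys keys, broadcast the
    scores back through the code assignments, and return the indices
    of the k tokens with the highest approximate attention logits."""
    code_scores = codebook @ query            # (n_codes,)
    approx = code_scores[assignments]         # (n_keys,)
    return np.argsort(approx)[-k:][::-1]

query = rng.standard_normal(d)
selected = retrieve_topk(query)   # only these tokens are fetched from CPU
```

The retrieval cost per query drops from `n_keys` dot products to `n_codes`, which is what makes on-demand fetching from the offloaded cache affordable.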
Problem

Research questions and friction points this paper is trying to address.

Reduce memory footprint in LLMs
Improve KV cache retrieval efficiency
Enhance long context serving throughput
Innovation

Methods, ideas, or system contributions that make the work stand out.

Windowed Rotary Position Embedding
Query-Aware Vector Quantization
Heterogeneous Inference Architecture
Junhui He
Wuhan University
Nan Wang
Alibaba Cloud Computing
Rui Xu
Alibaba Cloud Computing, Jinan University
Shangyu Wu
City University of Hong Kong
MLSys, AI4DB
Peng Zhou
Alibaba Cloud Computing
Qiang Liu
Alibaba Cloud Computing
C. Xue
MBZUAI
Qingan Li
Computer School, Wuhan University
compilation, embedded system, software engineering