KVCompose: Efficient Structured KV Cache Compression with Composite Tokens

📅 2025-09-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the memory bottleneck in long-context inference for large language models, caused by the linear growth of KV caches with context length and model depth, this paper proposes an attention-guided, layer-adaptive structured KV cache compression framework. The method aggregates attention scores to estimate token importance, performs head-specific token selection, and aligns the selected tokens into composite tokens, while a global allocation mechanism dynamically assigns per-layer retention budgets; throughout, it preserves standard tensor layouts and compatibility with existing inference engines. Its key innovation is unifying attention-guided token pruning and inter-layer adaptive compression within a regular tensor structure, without custom kernels or disruption of the computational flow. Experiments show the approach reduces KV cache memory consumption by up to 58% while maintaining or improving generation quality across diverse long-context tasks, outperforming both structured and semi-structured compression baselines.
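The head-specific selection and composite-token alignment described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `composite_token_selection`, its argument names, and the use of plain top-k over aggregated scores are all assumptions made for clarity.

```python
import numpy as np

def composite_token_selection(attn_scores, keep):
    """Illustrative sketch: each head independently keeps its `keep`
    highest-scoring cached tokens (scores assumed pre-aggregated over
    queries), and the picks are aligned so that slot j across all heads
    forms one composite token in a regular [heads, keep] tensor.
    Names and shapes are assumptions, not the paper's actual API."""
    num_heads, seq_len = attn_scores.shape
    # Head-specific selection: top-`keep` token indices per head.
    picks = np.argsort(attn_scores, axis=1)[:, -keep:]  # [heads, keep]
    # Alignment: sort by position so the cache keeps a uniform layout
    # (no ragged per-head lists, no custom kernels needed downstream).
    picks = np.sort(picks, axis=1)
    return picks

rng = np.random.default_rng(0)
scores = rng.random((4, 16))  # 4 heads, 16 cached tokens
kept = composite_token_selection(scores, keep=6)
print(kept.shape)  # (4, 6): uniform layout, engine-compatible
```

The point of the final sort is that the retained cache remains a dense, regularly shaped tensor, which is what lets the compressed cache drop into standard inference engines unchanged.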

📝 Abstract
Large language models (LLMs) rely on key-value (KV) caches for efficient autoregressive decoding; however, cache size grows linearly with context length and model depth, becoming a major bottleneck in long-context inference. Prior KV cache compression methods either enforce rigid heuristics, disrupt tensor layouts with per-attention-head variability, or require specialized compute kernels. We propose a simple, yet effective, KV cache compression framework based on attention-guided, layer-adaptive composite tokens. Our method aggregates attention scores to estimate token importance, selects head-specific tokens independently, and aligns them into composite tokens that respect the uniform cache structure required by existing inference engines. A global allocation mechanism further adapts retention budgets across layers, assigning more capacity to layers with informative tokens. This approach achieves significant memory reduction while preserving accuracy, consistently outperforming prior structured and semi-structured methods. Crucially, our approach remains fully compatible with standard inference pipelines, offering a practical and scalable solution for efficient long-context LLM deployment.
Problem

Research questions and friction points this paper is trying to address.

Compressing KV cache to reduce memory bottleneck
Maintaining accuracy while enabling long-context inference
Ensuring compatibility with standard inference engines
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention-guided composite token compression
Layer-adaptive global allocation mechanism
Maintains standard inference pipeline compatibility
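The layer-adaptive global allocation mechanism listed above can be sketched as a proportional split of a total token budget across layers by aggregate importance. This is a hedged approximation under assumed semantics; `allocate_layer_budgets` and its exact rounding rule are illustrative, and the paper's actual allocation criterion may differ.

```python
import numpy as np

def allocate_layer_budgets(layer_scores, total_budget, min_keep=1):
    """Illustrative sketch of a global allocation mechanism: split a
    total retention budget across layers in proportion to each layer's
    aggregate importance score, with a floor of `min_keep` tokens per
    layer, so more informative layers receive more capacity."""
    scores = np.asarray(layer_scores, dtype=float)
    weights = scores / scores.sum()
    budgets = np.maximum(np.floor(weights * total_budget).astype(int), min_keep)
    # Hand out any rounding remainder to the highest-scoring layers first.
    remainder = total_budget - budgets.sum()
    for i in np.argsort(-scores):
        if remainder <= 0:
            break
        budgets[i] += 1
        remainder -= 1
    return budgets

# Three layers with unequal importance sharing a 100-token budget.
b = allocate_layer_budgets([3.0, 1.0, 5.0], total_budget=100)
print(b.sum())  # 100
```

The design choice being illustrated is that the budget is global rather than uniform per layer: layers whose tokens score as more informative keep more cache entries.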
Dmitry Akulov
Paris Research Center, Huawei Technologies, Boulogne-Billancourt, France
Mohamed Sana
Paris Research Center, Huawei Technologies, Boulogne-Billancourt, France
Antonio De Domenico
Huawei Technologies
Machine learning · Mobile networks · 5G · Wireless communications
Tareq Si Salem
Paris Research Center, Huawei Technologies, Boulogne-Billancourt, France
Nicola Piovesan
Huawei Technologies
Mobile networks · Energy efficiency · Machine learning · Large Language Models · Generative AI
Fadhel Ayed
Department of Statistics, University of Oxford
Statistics · Machine Learning