🤖 AI Summary
To address the memory bottleneck in long-context inference for large language models, caused by the linear growth of KV caches with context length and number of layers, this paper proposes an attention-guided, layer-adaptive structured KV cache compression framework. The method leverages attention score aggregation and a global allocation mechanism to perform head-specific token selection and composite token alignment, dynamically allocating per-layer retention budgets while preserving standard tensor layouts and compatibility with existing inference engines. Its key innovation lies in unifying attention-guided token pruning and inter-layer adaptive compression within a regular tensor structure, without requiring custom kernels or disrupting computational flow. Experiments demonstrate that the approach reduces KV cache memory consumption by up to 58% while maintaining or improving generation quality across diverse long-text tasks, outperforming both structured and semi-structured compression baselines.
📝 Abstract
Large language models (LLMs) rely on key-value (KV) caches for efficient autoregressive decoding; however, cache size grows linearly with context length and model depth, becoming a major bottleneck in long-context inference. Prior KV cache compression methods either enforce rigid heuristics, disrupt tensor layouts with per-attention-head variability, or require specialized compute kernels.
We propose a simple yet effective KV cache compression framework based on attention-guided, layer-adaptive composite tokens. Our method aggregates attention scores to estimate token importance, selects head-specific tokens independently, and aligns them into composite tokens that respect the uniform cache structure required by existing inference engines. A global allocation mechanism further adapts retention budgets across layers, assigning more capacity to layers with informative tokens. This approach achieves significant memory reduction while preserving accuracy, consistently outperforming prior structured and semi-structured methods. Crucially, our approach remains fully compatible with standard inference pipelines, offering a practical and scalable solution for efficient long-context LLM deployment.
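As a rough illustration of the mechanism the abstract describes, the sketch below performs head-specific top-k token selection guided by aggregated attention scores, aligns the selections into a regular composite-token tensor, and splits a global budget across layers by importance. All function names, shapes, and the proportional allocation rule are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def allocate_layer_budgets(layer_scores, total_budget):
    """Split a global retention budget across layers in proportion to each
    layer's aggregate importance score (hypothetical allocation rule)."""
    weights = np.asarray(layer_scores, dtype=float)
    weights = weights / weights.sum()
    budgets = np.floor(weights * total_budget).astype(int)
    # Give any rounding remainder to the most important layer.
    budgets[np.argmax(weights)] += total_budget - budgets.sum()
    return budgets

def compress_layer_kv(attn_scores, keys, values, budget):
    """Head-specific top-k selection aligned into composite tokens.

    attn_scores: [num_heads, seq_len] aggregated attention mass per cached token.
    keys, values: [num_heads, seq_len, head_dim].
    Returns keys/values of uniform shape [num_heads, budget, head_dim]."""
    idx = np.argsort(-attn_scores, axis=1)[:, :budget]  # top-k per head
    idx = np.sort(idx, axis=1)  # keep retained tokens in temporal order
    # Slot j of the compressed cache acts as a "composite token": it may hold
    # a different original position for every head, but the tensor layout
    # stays rectangular, so a standard inference engine can consume it.
    k_c = np.take_along_axis(keys, idx[..., None], axis=1)
    v_c = np.take_along_axis(values, idx[..., None], axis=1)
    return k_c, v_c
```

Note that because each head keeps its own token subset, the retained positions differ across heads, yet the output remains a dense `[num_heads, budget, head_dim]` tensor rather than a ragged per-head structure; this is what keeps the cache compatible with existing kernels.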