HeteroCache: A Dynamic Retrieval Approach to Heterogeneous KV Cache Compression for Long-Context LLM Inference

📅 2026-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant bottleneck posed by the linear memory growth of KV caches in long-context large language model inference, where existing static or coarse-grained dynamic compression methods struggle to balance global information retention with computational efficiency. The authors propose a training-free, dynamic KV cache compression framework that, for the first time, jointly leverages temporal stability across attention heads and intra-layer spatial redundancy to enable fine-grained cache budget allocation. By integrating attention head classification, representative head monitoring, hierarchical storage, and an asynchronous on-demand CPU retrieval mechanism, the approach effectively hides I/O latency. Evaluated across multiple long-context benchmarks, the method achieves state-of-the-art performance, accelerating inference by up to 3× at context lengths of 224K tokens.
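The fine-grained, per-head budget allocation described above can be sketched as a toy routine. This is an illustrative assumption, not the paper's actual algorithm: the function name, the cosine-similarity stability metric, and the minimum-budget floor are all invented here for clarity.

```python
import numpy as np

def allocate_head_budgets(attn_t, attn_prev, total_budget, min_budget=16):
    """Toy per-head KV cache budget allocation (hypothetical, for illustration).

    attn_t, attn_prev: [num_heads, seq_len] attention distributions at two
    consecutive decode steps. Heads whose attention shifts quickly between
    steps (low similarity) receive a larger share of the cache budget.
    """
    # Temporal stability per head: cosine similarity between the two steps.
    dot = (attn_t * attn_prev).sum(axis=-1)
    norm = np.linalg.norm(attn_t, axis=-1) * np.linalg.norm(attn_prev, axis=-1)
    stability = dot / np.maximum(norm, 1e-8)
    # Drifting heads (low stability) need more cache to track context changes.
    drift = np.clip(1.0 - stability, 0.0, None)
    weights = drift / max(drift.sum(), 1e-8)
    # Every head keeps a floor; the remainder is split by drift weight.
    spare = total_budget - min_budget * len(weights)
    return (weights * spare).astype(int) + min_budget
```

A stable head (unchanged attention) would end up near the floor budget, while a head whose attention flips between steps would absorb most of the spare budget.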

📝 Abstract
The linear memory growth of the KV cache poses a significant bottleneck for LLM inference on long-context tasks. Existing static compression methods often fail to preserve globally important information, principally because they overlook the attention drift phenomenon, in which token significance evolves dynamically. Although recent dynamic retrieval approaches attempt to address this issue, they typically rely on coarse-grained caching strategies and incur high I/O overhead due to frequent data transfers. To overcome these limitations, we propose HeteroCache, a training-free dynamic compression framework. Our method is built on two key insights: attention heads exhibit diverse temporal heterogeneity, and there is significant spatial redundancy among heads within the same layer. Guided by these insights, HeteroCache categorizes heads by stability and redundancy, then applies a fine-grained weighting strategy that allocates larger cache budgets to heads with rapidly shifting attention, capturing context changes and addressing the inefficiency of coarse-grained strategies. Furthermore, we employ a hierarchical storage mechanism in which a subset of representative heads monitors attention shifts and triggers asynchronous, on-demand retrieval of contexts from the CPU, effectively hiding I/O latency. Experiments demonstrate that HeteroCache achieves state-of-the-art performance on multiple long-context benchmarks and accelerates decoding by up to $3\times$ over the original model at a 224K context length. Our code will be open-sourced.
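As a rough illustration of how monitoring by representative heads might trigger an asynchronous CPU fetch that overlaps with decoding, the sketch below uses a background thread as a stand-in for the CPU-to-GPU transfer. The class name, API, and in-memory dict store are all hypothetical, not the paper's implementation.

```python
import threading
import queue
import time

class AsyncKVRetriever:
    """Hypothetical sketch: on-demand retrieval of evicted KV entries from a
    CPU-side store, overlapped with decode compute via a worker thread."""

    def __init__(self, cpu_store):
        self.cpu_store = cpu_store          # token_id -> KV entry kept on CPU
        self.requests = queue.Queue()
        self.results = {}
        self.lock = threading.Lock()
        worker = threading.Thread(target=self._run, daemon=True)
        worker.start()

    def _run(self):
        # Background worker: services fetch requests while decoding continues.
        while True:
            token_ids = self.requests.get()
            if token_ids is None:
                break
            fetched = {t: self.cpu_store[t] for t in token_ids}
            with self.lock:
                self.results.update(fetched)

    def prefetch(self, token_ids):
        # Called when representative heads detect an attention shift;
        # the fetch proceeds asynchronously, hiding I/O latency.
        self.requests.put(list(token_ids))

    def collect(self, token_ids):
        # Blocks only if the background fetch has not finished yet.
        while True:
            with self.lock:
                if all(t in self.results for t in token_ids):
                    return {t: self.results.pop(t) for t in token_ids}
            time.sleep(0.001)
```

In a real system the dict lookup would be a pinned-memory host-to-device copy on a side CUDA stream; the structure (issue early, collect late) is what hides the transfer behind decode compute.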
Problem

Research questions and friction points this paper is trying to address.

KV cache compression
long-context LLM inference
attention drift
memory bottleneck
I/O overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

KV cache compression
dynamic retrieval
attention heterogeneity
hierarchical storage
long-context LLM inference
Zhiyuan Shi
Researcher, Onfido, London
Computer Vision, Deep Learning, Machine Learning
Qibo Qiu
Zhejiang University
computer vision, deep learning
Feng Xue
The Center for Artificial Intelligence, Geely
Zhonglin Jiang
The Center for Artificial Intelligence, Geely
Li Yu
China Mobile (Zhejiang) Research & Innovation Institute
Jian Jiang
China Mobile (Zhejiang) Research & Innovation Institute
Xiaofei He
Professor of Computer Science, Zhejiang University
machine learning, computer vision, data mining
Wenxiao Wang
Zhejiang University
Computer Vision, Deep Learning