Locret: Enhancing Eviction in Long-Context LLM Inference with Trained Retaining Heads

📅 2024-10-02
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the KV cache memory explosion and the limited support for streaming long-context inference on consumer-grade hardware, this paper proposes the first learnable cache eviction framework compatible with chunked prefill. The method introduces causally aware, learnable retaining heads that dynamically identify and evict non-critical KV cache units; designs a lightweight, chunked-prefill-compatible eviction policy; and requires less than one GPU hour of fine-tuning. Experiments demonstrate up to 20× KV cache compression with less than 10% degradation in generation quality. To the authors' knowledge, this is the first approach enabling high-quality inference over 128K+ context lengths on a single RTX 4090 GPU, significantly improving the practicality and deployment efficiency of long-context LLMs.

📝 Abstract
Scaling the input context length of a large language model (LLM) incurs a significant increase in computation cost and memory footprint to maintain the attention key-value (KV) cache. Existing KV cache compression methods suffer from inefficient compression strategies and limited memory reduction, making it difficult for LLMs to conduct long-context inference on consumer-grade devices, especially on streaming long-context input. Such obstacles prevent consumer-grade devices from supporting more complex applications, creating challenges for the democratization of LLMs. To overcome this, we propose Locret, the first framework to create an eviction policy compatible with chunked prefill. By evaluating the causal importance of KV cache units with learnable retaining heads, Locret enables precise eviction of cache units, facilitating efficient long-context inference. In extensive empirical studies, Locret outperforms recent popular and competitive approaches in terms of memory efficiency and generation quality -- it achieves a KV cache compression ratio of up to 20× with less than 10% performance loss. Furthermore, Locret achieves 128K+ long-context inference on a single NVIDIA 4090 GPU without compromising generation quality, at a cost of less than 1 GPU hour of additional training.
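The abstract describes the core loop: score each KV cache unit with a learned retaining head, then evict the lowest-scoring units after each prefill chunk so the cache never exceeds a fixed budget. The sketch below illustrates that idea only; the shapes, the linear scoring probe, and all constants are illustrative stand-ins, not the paper's actual architecture.

```python
# Minimal sketch of score-based KV cache eviction under chunked prefill.
# The "retaining head" here is a toy linear probe; Locret's real heads are
# trained alongside the model -- this only illustrates the eviction mechanics.
import numpy as np

rng = np.random.default_rng(0)
HEAD_DIM, BUDGET, CHUNK = 8, 16, 10  # toy sizes, not from the paper

# Stand-in scoring head: maps a (key, value) pair to a scalar importance score.
W = rng.normal(size=(2 * HEAD_DIM,))

def score(k, v):
    """Importance score for each cached KV unit (rows of k and v)."""
    return np.concatenate([k, v], axis=-1) @ W

def evict(cache_k, cache_v, cache_s, budget):
    """Keep only the `budget` highest-scoring KV units, in causal order."""
    if len(cache_s) <= budget:
        return cache_k, cache_v, cache_s
    keep = np.sort(np.argsort(cache_s)[-budget:])  # sort to preserve order
    return cache_k[keep], cache_v[keep], cache_s[keep]

# Chunked prefill: process the prompt chunk by chunk, evicting after each
# chunk so peak cache memory stays bounded by the budget.
cache_k = np.empty((0, HEAD_DIM))
cache_v = np.empty((0, HEAD_DIM))
cache_s = np.empty((0,))
prompt_k = rng.normal(size=(50, HEAD_DIM))
prompt_v = rng.normal(size=(50, HEAD_DIM))

for start in range(0, len(prompt_k), CHUNK):
    k, v = prompt_k[start:start + CHUNK], prompt_v[start:start + CHUNK]
    cache_k = np.concatenate([cache_k, k])
    cache_v = np.concatenate([cache_v, v])
    cache_s = np.concatenate([cache_s, score(k, v)])
    cache_k, cache_v, cache_s = evict(cache_k, cache_v, cache_s, BUDGET)

print(len(cache_k))  # cache size never exceeds the budget of 16
```

The key property is that the cache is bounded after every chunk, which is what makes the policy compatible with chunked prefill: memory peaks at roughly `BUDGET + CHUNK` units rather than growing with the full prompt length.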
Problem

Research questions and friction points this paper is trying to address.

Long Context Processing
Computational Cost
Memory Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Locret
Efficient KV Cache Compression
Long Sequence Processing
Yuxiang Huang
Tsinghua University
Efficient AI, Natural Language Processing, Machine Learning System
Binhang Yuan
Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China.
Xu Han
Department of Computer Science and Technology, Institute for Artificial Intelligence, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China.
Chaojun Xiao
Postdoctoral Researcher, Tsinghua University
Large Language Model
Zhiyuan Liu
Department of Computer Science and Technology, Institute for Artificial Intelligence, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China.