Lookahead Q-Cache: Achieving More Consistent KV Cache Eviction via Pseudo Query

📅 2025-05-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In large language model (LLM) inference, the KV cache grows linearly with sequence length, causing prohibitive memory overhead; existing cache eviction methods rely on attention scores from the prefill phase, which misalign with actual decoding queries—especially degrading performance under memory constraints. This paper proposes a dynamic cache eviction framework based on *pseudo-forward queries*: lightweight synthetic queries approximating real decoding queries are generated to construct an observation window better aligned with the decoding stage; attention-based importance is then re-evaluated and dynamic eviction performed. The method is fully compatible with mainstream KV compression and quantization techniques and requires no architectural modifications. Evaluated on LongBench and Needle-in-a-Haystack benchmarks, it consistently outperforms state-of-the-art approaches: under cache constraints, it achieves 1–4-point average gains on LongBench and enables plug-and-play co-optimization with existing efficiency techniques.

📝 Abstract
Large language models (LLMs) rely on the key-value cache (KV cache) to accelerate decoding by reducing redundant computations. However, the KV cache memory usage grows substantially with longer text sequences, posing challenges for efficient deployment. Existing KV cache eviction methods prune tokens using prefilling-stage attention scores, causing inconsistency with actual inference queries, especially under tight memory budgets. In this paper, we propose Lookahead Q-Cache (LAQ), a novel eviction framework that generates low-cost pseudo lookahead queries to better approximate the true decoding-stage queries. By using these lookahead queries as the observation window for importance estimation, LAQ achieves more consistent and accurate KV cache eviction aligned with real inference scenarios. Experimental results on LongBench and Needle-in-a-Haystack benchmarks show that LAQ outperforms existing methods across various budget levels, achieving a 1–4 point improvement on LongBench under limited cache budget. Moreover, LAQ is complementary to existing approaches and can be flexibly combined to yield further improvements.
Problem

Research questions and friction points this paper is trying to address.

KV cache memory grows with long sequences, hindering efficient deployment
Prefill-stage attention scores misalign with actual decoding-stage queries
Eviction accuracy degrades sharply under tight memory budgets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates low-cost pseudo lookahead queries that approximate decoding-stage queries
Uses these lookahead queries as the observation window for importance estimation
Achieves more consistent KV cache eviction and combines with existing compression methods
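The mechanism the bullets describe — score each cached token by the attention it receives from pseudo lookahead queries, then keep only the highest-scoring entries within the cache budget — can be sketched as follows. This is an illustrative single-head NumPy sketch under my own assumptions; the function name `evict_kv_with_lookahead` and the simple sum-aggregation of scores are simplifications, not the paper's exact LAQ algorithm (which also specifies how the lookahead queries themselves are generated).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def evict_kv_with_lookahead(keys, values, lookahead_queries, budget):
    """Illustrative sketch: rank cached tokens by attention mass from
    pseudo lookahead queries and retain the top-`budget` entries.

    keys, values: (seq_len, d) cached KV entries for one head
    lookahead_queries: (window, d) pseudo queries approximating decoding queries
    budget: number of KV entries to keep
    """
    d = keys.shape[-1]
    # attention of each lookahead query over the cached keys
    scores = softmax(lookahead_queries @ keys.T / np.sqrt(d), axis=-1)
    # aggregate importance over the observation window (simplifying assumption)
    importance = scores.sum(axis=0)
    # keep top-`budget` tokens, restoring original sequence order
    keep = np.sort(np.argsort(importance)[-budget:])
    return keys[keep], values[keep], keep

# toy usage
rng = np.random.default_rng(0)
K = rng.normal(size=(16, 8))
V = rng.normal(size=(16, 8))
Q_look = rng.normal(size=(4, 8))   # pseudo lookahead queries
K2, V2, idx = evict_kv_with_lookahead(K, V, Q_look, budget=6)
```

The key contrast with prefill-based eviction is only where `lookahead_queries` comes from: prior methods reuse queries observed during prefilling, whereas LAQ's point is that queries resembling the future decoding step rank token importance more consistently.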
Yixuan Wang
Research Center for Social Computing and Interactive Robotics, Harbin Institute of Technology, China
Shiyu Ji
University of California, Santa Barbara
Information Retrieval, Privacy, Security
Yijun Liu
Research Center for Social Computing and Interactive Robotics, Harbin Institute of Technology, China
Yuzhuang Xu
Tsinghua University
Natural Language Processing, Efficient AI, Machine Learning
Yang Xu
Research Center for Social Computing and Interactive Robotics, Harbin Institute of Technology, China
Qingfu Zhu
Harbin Institute of Technology
NLP, Code LLM
Wanxiang Che
Professor, Harbin Institute of Technology
Natural Language Processing