🤖 AI Summary
In large language model (LLM) inference, the KV cache grows linearly with sequence length, incurring prohibitive memory overhead. Existing cache eviction methods rely on attention scores from the prefill phase, which misalign with the actual decoding-stage queries and degrade performance, especially under tight memory budgets. This paper proposes a dynamic cache eviction framework based on *pseudo lookahead queries*: lightweight synthetic queries approximating real decoding queries are generated to build an observation window better aligned with the decoding stage; attention-based token importance is then re-estimated and eviction is performed dynamically. The method requires no architectural modifications and is fully compatible with mainstream KV cache compression and quantization techniques. Evaluated on the LongBench and Needle-in-a-Haystack benchmarks, it consistently outperforms state-of-the-art approaches, achieving 1–4-point average gains on LongBench under constrained cache budgets and enabling plug-and-play co-optimization with existing efficiency techniques.
📝 Abstract
Large language models (LLMs) rely on the key-value (KV) cache to accelerate decoding by reducing redundant computation. However, KV cache memory usage grows substantially with longer text sequences, posing challenges for efficient deployment. Existing KV cache eviction methods prune tokens using prefill-stage attention scores, causing inconsistency with the actual queries seen at inference, especially under tight memory budgets. In this paper, we propose Lookahead Q-Cache (LAQ), a novel eviction framework that generates low-cost pseudo lookahead queries to better approximate the true decoding-stage queries. By using these lookahead queries as the observation window for importance estimation, LAQ achieves more consistent and accurate KV cache eviction aligned with real inference scenarios. Experimental results on the LongBench and Needle-in-a-Haystack benchmarks show that LAQ outperforms existing methods across cache budget levels, achieving a 1–4 point improvement on LongBench under limited cache budgets. Moreover, LAQ is complementary to existing approaches and can be flexibly combined with them to yield further gains.
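To make the core idea concrete, below is a minimal NumPy sketch of lookahead-query-based KV eviction. It assumes single-head attention, pre-computed cached keys from the prefill phase, and pseudo lookahead queries that are already available (the paper's specific mechanism for generating them is not reproduced here); the function and variable names (`select_kv_budget`, `lookahead_queries`) are illustrative, not the paper's API.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def select_kv_budget(keys, lookahead_queries, budget):
    """Rank cached tokens by the attention mass they receive from the
    lookahead queries and return indices of the top-`budget` tokens to keep.

    keys:              [T, d] cached key vectors from the prefill phase
    lookahead_queries: [W, d] pseudo queries approximating decoding-stage queries
    budget:            number of KV entries to retain
    """
    d = keys.shape[-1]
    # [W, T]: attention of each lookahead query over all cached tokens
    scores = softmax(lookahead_queries @ keys.T / np.sqrt(d), axis=-1)
    # Aggregate importance across the observation window (mean here;
    # max-pooling is another common aggregation in eviction methods)
    importance = scores.mean(axis=0)  # [T]
    keep = np.argsort(importance)[-budget:]
    return np.sort(keep)  # preserve positional order of retained tokens

# Toy usage: 128 cached tokens, an 8-query observation window, keep 32 entries
rng = np.random.default_rng(0)
K = rng.standard_normal((128, 64))
Q_la = rng.standard_normal((8, 64))
kept = select_kv_budget(K, Q_la, budget=32)
print(kept.shape)  # (32,)
```

The design point illustrated here is the observation window: instead of scoring tokens with prefill-stage queries, importance is estimated against queries meant to resemble those issued during decoding, which is what aligns the eviction decision with actual inference.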