🤖 AI Summary
To address the memory explosion of KV caches and the quadratic computational cost of attention in long-context LLM inference, this paper formalizes KV cache management as a causal triad of Admission, Selection, and Eviction, the first such formulation. It proposes Write-Gated KV, a proactive write-control mechanism that uses a lightweight utility-prediction head to filter low-value states *before* they are written to the cache. Combined with a dual-cache architecture (a compact global cache plus a sliding local cache) and designed for compatibility with FlashAttention and paged KV, the approach improves hardware efficiency. On Llama models, it reduces KV memory usage by 46–57%, accelerates prefill by 3.03–3.45×, and speeds up decoding by 1.89–2.56×, with negligible accuracy degradation.
📝 Abstract
Long-context LLM inference is bottlenecked by quadratic attention complexity and linear KV cache growth. Prior approaches mitigate this via post-hoc selection or eviction but overlook the root inefficiency: indiscriminate writing to persistent memory. In this paper, we formalize KV cache management as a causal system of three primitives: KV Admission, Selection, and Eviction. We instantiate KV Admission via Write-Gated KV, a lightweight mechanism that learns to predict a token's utility before it enters the cache. By filtering out low-utility states early to maintain a compact global cache alongside a sliding local cache, Write-Gated KV reduces memory usage by 46–57% and delivers 3.03–3.45× prefill and 1.89–2.56× decode speedups on Llama models with negligible accuracy loss, all while remaining compatible with FlashAttention and paged-KV systems. These results demonstrate that learning what to write is a principled and practical recipe for efficient long-context inference. Code is available at https://github.com/EMCLab-Sinica/WG-KV .
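The admission idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the learned utility-prediction head is replaced by a stand-in scoring function, and the names (`utility_score`, `LOCAL_WINDOW`, `THRESHOLD`) are hypothetical. The key point it shows is that every token passes through the sliding local cache, but only tokens predicted to be high-utility are ever written to the persistent global cache.

```python
from collections import deque

LOCAL_WINDOW = 4   # sliding local cache size (illustrative value)
THRESHOLD = 0.5    # admission threshold for the global cache (illustrative value)

def utility_score(kv):
    # Stand-in for the lightweight utility-prediction head;
    # in the paper this is learned, here it just reads a precomputed score.
    return kv["score"]

def admit(kv, global_cache, local_cache):
    # Every token enters the sliding local cache (recent context)...
    local_cache.append(kv)
    # ...but only high-utility tokens are written to the compact global cache.
    if utility_score(kv) >= THRESHOLD:
        global_cache.append(kv)

global_cache = []
local_cache = deque(maxlen=LOCAL_WINDOW)  # old entries slide out automatically
stream = [{"tok": t, "score": s} for t, s in
          [("a", 0.9), ("b", 0.1), ("c", 0.7), ("d", 0.2), ("e", 0.8), ("f", 0.3)]]
for kv in stream:
    admit(kv, global_cache, local_cache)

print([kv["tok"] for kv in global_cache])  # ['a', 'c', 'e']
print([kv["tok"] for kv in local_cache])   # ['c', 'd', 'e', 'f']
```

Because low-utility states are filtered before being written, the global cache stays small regardless of sequence length, while the bounded local window preserves recent context.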