🤖 AI Summary
dLLMs suffer from prohibitively high KV cache memory overhead in bidirectional attention, hindering efficient long-context processing; existing cache eviction strategies—designed for autoregressive models—are incompatible with dLLMs’ parallel decoding. This paper proposes MaskKV, a training-free, fine-grained KV cache eviction framework. It introduces a novel mask-guided query attention scoring mechanism that identifies non-critical prompt tokens per attention head, and integrates inter-layer dynamic cache budget allocation to align with dLLMs’ bidirectional architecture. Evaluated on LLaDA, MaskKV retains only 256 KV pairs (<5% of full cache) while preserving 94% of original accuracy, achieving up to 31× inference speedup on 32k-length sequences. The method significantly enhances long-context efficiency without architectural or training modifications.
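The mask-guided scoring idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes we already have attention weights from masked-token queries to prompt-token keys, aggregates them into a per-head importance score for each prompt token, and keeps only the top-budget tokens per head. The function name and the sum-based aggregation are assumptions for illustration.

```python
import numpy as np

def mask_guided_eviction(attn, budget):
    """Hypothetical sketch of mask-query guided KV eviction.

    attn:   array of shape [heads, num_mask_queries, prompt_len],
            attention weights from mask-token queries to prompt keys.
    budget: number of KV pairs to retain per head.
    Returns per-head indices of the prompt tokens to keep.
    """
    # Aggregate each prompt token's importance across all mask queries.
    scores = attn.sum(axis=1)                    # [heads, prompt_len]
    # Keep the `budget` highest-scoring prompt tokens per head.
    keep = np.argsort(-scores, axis=1)[:, :budget]
    # Return indices in position order for convenient cache gathering.
    return np.sort(keep, axis=1)
```

Because selection is done independently per attention head, each head can retain a different subset of prompt tokens, which is what makes the eviction fine-grained.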
📝 Abstract
Diffusion large language models (dLLMs) present a promising alternative to dominant autoregressive models (ARMs) through their ability to decode in parallel, at the expense of substantial computation and memory costs. In particular, the cache mechanism for bidirectional attention in dLLMs demands a large memory footprint, restricting their ability to handle long contexts in resource-limited settings. Existing cache eviction strategies are designed for ARMs and ignore the unique characteristics of dLLMs, leading to unsatisfactory performance. To address these challenges, we introduce MaskKV, a training-free cache eviction framework tailored to dLLMs that focuses on the effect of mask tokens. MaskKV is built on two key innovations: (1) a mask-query guided scoring mechanism that leverages attention weights to identify and evict less critical prompt tokens for each head; (2) an adaptive cache budgeting strategy that improves efficiency by reducing allocation in intermediate layers and concentrating resources on prompt-preferring heads. On LLaDA with MaskKV, compressing the KV cache to only 256 pairs (less than 5% of tokens) retains 94% of the full-cache performance on LongBench and achieves up to 31x acceleration at a 32k prompt length. The code is publicly available at: https://github.com/jianuo-huang/MaskKV
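The adaptive budgeting strategy, reducing allocation in intermediate layers so more of the cache budget can go to layers (and heads) that rely on prompt tokens, might be sketched like this. The discount schedule, the choice of which layers count as "intermediate", and all names here are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def allocate_layer_budgets(total_budget, num_layers, mid_discount=0.5):
    """Hypothetical layer-wise KV budget allocation: intermediate
    layers receive a discounted share of `total_budget` (the total
    number of KV pairs retained across all layers), and the freed
    budget flows to early and late layers."""
    weights = np.ones(num_layers)
    # Treat the middle half of the network as "intermediate" layers.
    mid = slice(num_layers // 4, 3 * num_layers // 4)
    weights[mid] *= mid_discount
    weights = weights / weights.sum()            # normalize shares
    # Convert shares to integer per-layer budgets (floor keeps the
    # total at or below the requested budget).
    return np.floor(weights * total_budget).astype(int)
```

A per-head refinement could then split each layer's budget unevenly, giving prompt-preferring heads a larger share, using the same score-and-normalize pattern.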