Mask Tokens as Prophet: Fine-Grained Cache Eviction for Efficient dLLM Inference

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion large language models (dLLMs) suffer from prohibitively high KV cache memory overhead due to bidirectional attention, hindering efficient long-context processing; existing cache eviction strategies, designed for autoregressive models, are incompatible with dLLMs' parallel decoding. This paper proposes MaskKV, a training-free, fine-grained KV cache eviction framework. It introduces a mask-guided query attention scoring mechanism that identifies non-critical prompt tokens per attention head, and integrates inter-layer dynamic cache budget allocation to match dLLMs' bidirectional architecture. Evaluated on LLaDA, MaskKV retains only 256 KV pairs (under 5% of prompt tokens) while preserving 94% of full-cache accuracy, achieving up to 31× inference speedup at 32k prompt length. The method substantially improves long-context efficiency without architectural or training modifications.

📝 Abstract
Diffusion large language models (dLLMs) present a promising alternative to dominant autoregressive models (ARMs), offering parallel decoding at the expense of substantial computation and memory costs. Specifically, the cache mechanism for bidirectional attention in dLLMs demands a large memory footprint, restricting their ability to handle long contexts in resource-limited settings. Existing cache eviction strategies are designed for ARMs and ignore the unique characteristics of dLLMs, leading to unsatisfactory performance. To address these challenges, we introduce MaskKV, a training-free cache eviction framework tailored to dLLMs that focuses on the effect of mask tokens. MaskKV is built on two key innovations: (1) a mask-query guided scoring mechanism that leverages attention weights to identify and evict less critical prompt tokens for each head; (2) an adaptive cache budgeting strategy that improves efficiency by reducing allocation in intermediate layers and concentrating resources on prompt-preferring heads. On LLaDA with MaskKV, compressing the KV cache to only 256 pairs (less than 5% of tokens) retains 94% of the full-cache performance on LongBench and achieves up to 31x acceleration at a 32k prompt length. The code is publicly available at: https://github.com/jianuo-huang/MaskKV
Problem

Research questions and friction points this paper is trying to address.

Optimizes cache eviction for diffusion LLMs' bidirectional attention
Reduces memory footprint to handle long contexts efficiently
Improves inference speed while maintaining model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

MaskKV uses mask-token attention to guide KV cache eviction
It scores prompt tokens per head with a mask-query guided scoring mechanism
It allocates cache budgets adaptively across layers and heads
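The two innovations above can be sketched as a small per-head scoring and top-k selection routine. This is a minimal illustration, not the authors' implementation: the function names, tensor shapes, and the proportional head-budget rule are assumptions, and the real method additionally varies budgets across layers.

```python
import numpy as np

def mask_query_scores(attn, mask_positions):
    """Score each prompt token by the attention it receives from
    mask-token queries, averaged per head.
    attn: [heads, num_queries, prompt_len] attention weights.
    """
    return attn[:, mask_positions, :].mean(axis=1)  # [heads, prompt_len]

def evict(scores, budget):
    """Return the indices of the top-`budget` prompt tokens to keep, per head."""
    keep = np.argsort(scores, axis=-1)[:, -budget:]
    return np.sort(keep, axis=-1)

def allocate_budgets(scores, total_budget):
    """Split a total KV budget across heads in proportion to how much
    mask-query attention mass each head places on the prompt (a simple
    proportional rule standing in for the paper's allocation strategy)."""
    mass = scores.sum(axis=-1)
    frac = mass / mass.sum()
    return np.maximum(1, (frac * total_budget).astype(int))
```

At decode time, the KV pairs of prompt tokens outside each head's kept set would be dropped, shrinking the cache while preserving the tokens that mask queries attend to most.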
Authors
Jianuo Huang (Huazhong University of Science and Technology)
Yaojie Zhang (School of Artificial Intelligence, Shanghai Jiao Tong University)
Yicun Yang (School of Artificial Intelligence, Shanghai Jiao Tong University)
Benhao Huang (Carnegie Mellon University)
Biqing Qi (Shanghai Artificial Intelligence Laboratory)
Dongrui Liu (Shanghai Artificial Intelligence Laboratory)
Linfeng Zhang (DP Technology; AI for Science Institute) - research interests: AI for Science, multi-scale modeling, molecular simulation, drug/materials design