🤖 AI Summary
Diffusion-based large language models (dLLMs) suffer from inefficient inference due to their reliance on bidirectional attention, which precludes standard KV caching. To address this, we propose a **training-free, dual-adaptive KV caching framework** that enables efficient inference via a two-stage mechanism: (1) fine-grained token importance estimation dynamically determines which KV states to cache or update; and (2) adaptive KV state updates during generation support quasi left-to-right decoding, mitigating overconfidence in late-stage tokens. Our method is fully compatible with existing dLLMs (e.g., LLaDA, Dream), requires no architectural changes or parameter updates, and is deployed solely at inference time. Experiments demonstrate consistent improvements in both generation quality and inference speed—up to 2.1× acceleration—across diverse benchmarks. The implementation is publicly available.
📝 Abstract
Diffusion-based large language models (dLLMs), despite their promising performance, still suffer from inferior inference efficiency. This is because dLLMs rely on bidirectional attention and cannot directly benefit from the standard key-value (KV) cache as autoregressive models (ARMs) do. To tackle this issue, we introduce *Dual aDaptive Cache* (d$^2$Cache), a training-free approximate KV cache framework for accelerating dLLM inference. d$^2$Cache features a two-stage fine-grained selection strategy to identify a subset of tokens and adaptively update their KV states at each decoding step, while caching the KV states of the remaining tokens for reuse. Furthermore, d$^2$Cache naturally offers a more reliable decoding alternative, which can enable quasi left-to-right generation and mitigate premature overconfidence in tokens at the end of the sequence. Extensive experimental results on two representative dLLMs (i.e., LLaDA and Dream) demonstrate that d$^2$Cache not only achieves substantial inference speedups, but also yields consistent improvements in generation quality. The code is available at https://github.com/Kamichanw/d2Cache.
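To make the caching idea above concrete, here is a minimal toy sketch of the per-step selection logic: score positions by an importance proxy, recompute KV states only for the top-k most important still-masked positions, and reuse cached KV states for everything else. All names, the use of confidence as the importance score, and the top-k rule are illustrative assumptions, not the authors' actual implementation (see the linked repository for that).

```python
import numpy as np

def select_update_positions(confidence, masked, k):
    """Toy proxy for d2Cache's fine-grained selection step.

    confidence: per-position score standing in for the paper's token
        importance estimate (assumption: higher = more worth updating).
    masked: boolean array, True where the token is still undecoded.
    k: budget of positions whose KV states are recomputed this step.

    Positions NOT returned would keep reusing their cached KV states.
    """
    # Decoded positions are never re-selected; mask them out of the ranking.
    scores = np.where(masked, confidence, -np.inf)
    k = min(k, int(masked.sum()))
    # Indices of the k highest-scoring masked positions.
    return np.argsort(scores)[::-1][:k]

# Toy example: position 0 is already decoded; of the masked positions,
# 2 and 3 have the highest scores, so their KV states get refreshed.
conf = np.array([0.9, 0.2, 0.8, 0.5])
masked = np.array([False, True, True, True])
print(select_update_positions(conf, masked, k=2))  # → positions {2, 3}
```

In the actual method the importance estimate is computed inside the model at each decoding step; this sketch only shows the cache-or-update partition that the two-stage strategy produces.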