🤖 AI Summary
Diffusion language models (DLMs), owing to their non-autoregressive architecture and bidirectional attention, cannot leverage the conventional KV cache for inference acceleration. Method: This paper introduces dKV-Cache, a training-free, KV-cache-inspired mechanism tailored to DLMs, featuring delayed caching, conditional key-value management, and stepwise updates aligned with the iterative denoising process. Two complementary variants are proposed: dKV-Cache-Decode (near-lossless) and dKV-Cache-Greedy (higher speed at some cost in accuracy). The work also provides the first evidence that existing DLMs under-utilize contextual information during inference. Contribution/Results: As a plug-and-play solution requiring no fine-tuning, dKV-Cache achieves 2–10× inference speedup across diverse tasks, including language understanding, mathematical reasoning, and code generation, while preserving model performance. It substantially narrows the practical gap between DLMs and autoregressive models.
📝 Abstract
Diffusion Language Models (DLMs) have emerged as promising competitors to autoregressive language models. However, they have long been constrained by slow inference. A core challenge is that their non-autoregressive architecture and bidirectional attention preclude the key-value (KV) cache that accelerates autoregressive decoding. We address this bottleneck by proposing a KV-cache-like mechanism, delayed KV-Cache, for the denoising process of DLMs. Our approach is motivated by the observation that different tokens have distinct representation dynamics throughout the diffusion process. Accordingly, we propose a delayed and conditioned caching strategy for key and value states. We design two complementary variants to cache key and value states step by step: (1) dKV-Cache-Decode, which provides almost lossless acceleration and even improves performance on long sequences, suggesting that existing DLMs may under-utilize contextual information during inference; and (2) dKV-Cache-Greedy, which caches more aggressively with a reduced cache lifespan, achieving higher speed-ups and quadratic time complexity at the cost of some performance degradation. In the end, dKV-Cache achieves a 2–10× inference speedup, largely narrowing the gap between autoregressive models and DLMs. We evaluate dKV-Cache on several benchmarks, delivering acceleration across general language understanding, mathematical reasoning, and code generation. Experiments demonstrate that caching is also viable for DLMs, even in a training-free manner on current models.
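To make the delayed, conditioned caching concrete, here is a minimal single-head PyTorch sketch of the idea as we read it from the abstract. The toy embedding table, `attend` function, and one-token-per-step unmasking loop are illustrative stand-ins, not the authors' implementation: a position's cached key/value states become reusable only from the step after its token is unmasked, since its representation shifts sharply at the decoding step itself.

```python
import torch
import torch.nn.functional as F

# Toy single-head sketch of delayed, conditioned KV caching (our reading of
# dKV-Cache-Decode, not the authors' code).

torch.manual_seed(0)
D, L, MASK = 16, 8, 0                              # head dim, length, mask id
emb = torch.randn(32, D)                           # toy embedding table
Wq, Wk, Wv = (torch.randn(D, D) * D ** -0.5 for _ in range(3))

def attend(h, k_cache, v_cache, reuse):
    """Bidirectional attention that serves cached K/V where `reuse` is True."""
    q, k_new, v_new = h @ Wq, h @ Wk, h @ Wv
    k = torch.where(reuse[:, None], k_cache, k_new)   # conditional KV management
    v = torch.where(reuse[:, None], v_cache, v_new)
    out = F.scaled_dot_product_attention(q[None], k[None], v[None])[0]
    return out, k, v                                  # stepwise cache update

tokens = torch.full((L,), MASK)                       # fully masked sequence
k_cache, v_cache = torch.zeros(L, D), torch.zeros(L, D)
reuse = torch.zeros(L, dtype=torch.bool)              # nothing cached yet

for step in range(L):                                 # unmask one token per step
    visible = tokens != MASK                          # decoded before this step
    h = emb[tokens]                                   # stand-in for the backbone
    out, k_cache, v_cache = attend(h, k_cache, v_cache, reuse)
    # Delayed condition: K/V computed this step (with decoded tokens visible)
    # become reusable only at the next step; freshly unmasked positions are
    # recomputed once more before their cache entries are trusted.
    reuse = visible
    tokens[step] = step + 1                           # pretend-decode a token
```

Under this reading, dKV-Cache-Greedy would differ mainly by caching more aggressively and shortening the cache lifespan, trading some accuracy for additional speed.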