🤖 AI Summary
Diffusion language models (DLLMs) enable parallel decoding but typically rely on bidirectional attention, which is incompatible with the prefix KV caching at the heart of vLLM and other optimized autoregressive (AR) inference engines; as a result, they rarely outperform AR baselines in practice.
Method: We propose a parallel decoding framework built entirely on standard causal attention: (1) Topological Reordering, which physically moves already-observed tokens to the front of the sequence while preserving their logical positions, so that every masked position conditions on all observed tokens under a strict causal mask and standard prefix KV caching applies; and (2) streaming token commitment with fixed-parallel-load scheduling, which eliminates the stop-and-wait overhead inherent in block-wise diffusion.
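The reordering idea can be sketched in a few lines; `topological_reorder` and the exact index layout below are illustrative assumptions, not the paper's implementation. The point is that a strict causal mask over the *physical* order lets each masked position (placed after the observed block) attend to every observed token, while the observed block's KV entries never change and can be cached like an ordinary AR prefix.

```python
def topological_reorder(observed_mask):
    """Build a physical order that puts observed tokens first.

    observed_mask[i] is True if logical position i has already been
    decoded. Each token keeps its logical position id (used for
    positional embeddings), only its physical slot changes.
    """
    observed = [i for i, seen in enumerate(observed_mask) if seen]
    masked = [i for i, seen in enumerate(observed_mask) if not seen]
    # Physical layout: observed block first, then all masked positions.
    return observed + masked

# Logical positions 0 and 2 are observed; 1, 3, 4 are still masked.
order = topological_reorder([True, False, True, False, False])
print(order)  # -> [0, 2, 1, 3, 4]
```

With this layout, committing a new token only appends to the observed block, so the cached prefix grows monotonically instead of being invalidated.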
Results: On a deployment-level benchmark with AR baselines served by vLLM, our method achieves speedups approaching 3× on challenging reasoning benchmarks and up to 10× in low-entropy generation regimes, while matching the generation quality of strong AR baselines. To our knowledge, this is the first demonstration of consistent, significant latency reduction over a state-of-the-art AR inference system in a realistic serving environment.
📝 Abstract
Autoregressive (AR) generation is the standard decoding paradigm for Large Language Models (LLMs), but its token-by-token nature limits parallelism at inference time. Diffusion Language Models (DLLMs) offer parallel decoding by recovering multiple masked tokens per step; however, in practice they often fail to translate this parallelism into deployment speed gains over optimized AR engines (e.g., vLLM). A key reason is that many DLLMs rely on bidirectional attention, which breaks standard prefix KV caching and forces repeated contextualization, undermining efficiency. We propose WeDLM, a diffusion decoding framework built entirely on standard causal attention to make parallel generation prefix-cache friendly. The core idea is to let each masked position condition on all currently observed tokens while keeping a strict causal mask, achieved by Topological Reordering that moves observed tokens to the physical prefix while preserving their logical positions. Building on this property, we introduce a streaming decoding procedure that continuously commits confident tokens into a growing left-to-right prefix and maintains a fixed parallel workload, avoiding the stop-and-wait behavior common in block diffusion methods. Experiments show that WeDLM preserves the quality of strong AR backbones while delivering substantial speedups, approaching 3× on challenging reasoning benchmarks and up to 10× in low-entropy generation regimes; critically, our comparisons are against AR baselines served by vLLM under matched deployment settings, demonstrating that diffusion-style decoding can outperform an optimized AR engine in practice.
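The streaming decoding procedure described above can be sketched as a simple loop; this is a minimal illustration, not the paper's algorithm, and `predict`, `threshold`, and the fallback rule are hypothetical stand-ins for the model call and confidence criterion. The loop always keeps K masked slots in flight (the fixed parallel workload) and commits the longest confident run at the prefix boundary, so the prefix only ever grows left-to-right.

```python
def streaming_decode(predict, threshold, total_len, K=4):
    """Decode total_len tokens with a fixed parallel load of K slots.

    predict(prefix, slots) -> {slot: (token, confidence)} is assumed
    to fill the given masked slots conditioned on the committed prefix.
    """
    prefix = []
    while len(prefix) < total_len:
        start = len(prefix)
        # Fixed workload: always K masked slots, clipped at sequence end.
        slots = list(range(start, min(start + K, total_len)))
        preds = predict(prefix, slots)
        # Commit the longest confident run starting at the prefix boundary.
        run = []
        for s in slots:
            token, conf = preds[s]
            if conf < threshold:
                break
            run.append(token)
        if not run:
            # Fallback: commit one token anyway to guarantee progress
            # (an illustrative choice, not from the source).
            run = [preds[start][0]]
        prefix.extend(run)
    return prefix
```

Because every committed token lands at the right edge of the prefix, the cached KV state stays valid across iterations and no step has to wait for an entire block to finish, unlike block-wise diffusion schedules.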