🤖 AI Summary
Diffusion Large Language Models (DLLMs) suffer from low inference efficiency, and existing parallel decoding methods compromise model performance. **Method:** This paper proposes the first lossless parallel decoding framework for DLLMs. It generates multiple candidate tokens in parallel and leverages the model's inherent bidirectional attention to verify their contextual consistency, ensuring outputs are strictly equivalent to those of standard static sampling without requiring additional forward passes. **Contribution/Results:** The approach breaks the speed-accuracy trade-off: on mathematical reasoning tasks, it achieves up to a 2.8× throughput improvement with zero performance degradation. Crucially, it is the first method to realize *strictly equivalent* parallel decoding for DLLMs, i.e., preserving the exact output of static sampling, thereby establishing a new paradigm for efficient DLLM deployment.
📝 Abstract
Diffusion Large Language Models (DLLMs) have emerged as a new paradigm of language modeling beyond autoregressive next-token prediction. Thanks to their bidirectional attention mechanism, DLLMs are better at capturing contextual dependencies, and thus show unique advantages on challenges like the famous "reversal curse" and learning under data-constrained scenarios. However, this bidirectional nature also poses an obstacle: DLLMs are not inherently compatible with KV Cache, and consequently their inference efficiency is not competitive with autoregressive models. By exploiting their inherent capability for multi-token prediction, existing parallel decoding algorithms can speed up DLLM inference, but at the cost of non-negligible performance degradation. To overcome this challenge, we introduce Free Draft-and-Verification (Freedave), a novel fast sampling algorithm tailored for DLLMs that achieves lossless parallel decoding. Specifically, we propose a pipeline of parallel-decoded candidate generation and verification, which is guaranteed to reproduce the same sequence generated by static sampling, without introducing extra model forward calls. By applying Freedave, the throughput of DLLMs can be boosted up to $2.8\times$ without performance degradation on math reasoning tasks.
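To make the draft-and-verify idea concrete, here is a minimal toy sketch of such a pipeline. This is *not* the authors' implementation: `toy_model` is a hypothetical stand-in for a DLLM forward pass, and for clarity the verification loop re-invokes it per position, whereas Freedave folds verification into the same forward call. The sketch only illustrates the lossless-equivalence property: every accepted drafted token matches what static (one-token-per-step) decoding would have produced, so the two decoders yield identical outputs.

```python
def toy_model(prefix, num_masked):
    # Hypothetical stand-in for a DLLM forward pass: greedily predicts
    # all masked positions at once, conditioned on the prefix.
    s = sum(prefix)
    return [(s + i + 1) % 97 for i in range(num_masked)]

def static_decode(prefix, n):
    # Baseline static sampling: unmask one token per forward call.
    seq = list(prefix)
    for _ in range(n):
        seq.append(toy_model(seq, 1)[0])
    return seq[len(prefix):]

def draft_and_verify_decode(prefix, n, k=4):
    # Draft-and-verify sketch: draft up to k tokens per step, then accept
    # the longest prefix of the draft whose tokens match the model's own
    # one-step predictions. Accepted output is therefore identical to
    # static_decode by construction.
    seq = list(prefix)
    remaining = n
    while remaining > 0:
        block = min(k, remaining)
        draft = toy_model(seq, block)      # parallel draft
        accepted = 0
        for i in range(block):             # verification pass
            expect = toy_model(seq, 1)[0]  # what static decoding yields
            if draft[i] != expect:
                break                      # reject the rest of the draft
            seq.append(expect)
            accepted += 1
        if accepted == 0:                  # safety fallback: always
            seq.append(toy_model(seq, 1)[0])  # make progress
            accepted = 1
        remaining -= accepted
    return seq[len(prefix):]
```

Because the first drafted token is always computed from the same context as the static prediction, each step accepts at least one token, and any additional accepted tokens are pure speedup; rejection simply falls back to static behavior, which is why the method is lossless.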