Free Draft-and-Verification: Toward Lossless Parallel Decoding for Diffusion Large Language Models

📅 2025-09-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion Large Language Models (DLLMs) suffer from low inference efficiency, and existing parallel decoding methods compromise model performance. Method: This paper proposes the first lossless parallel decoding framework for DLLMs, which generates multiple candidate sequences in parallel and leverages the model’s inherent bidirectional attention mechanism to verify contextual consistency—ensuring outputs are strictly equivalent to those from standard static sampling, without additional forward passes. Contribution/Results: Our approach breaks the speed–accuracy trade-off bottleneck: on mathematical reasoning tasks, it achieves a 2.8× throughput improvement while sustaining zero performance degradation. Crucially, it is the first method to realize *strictly equivalent* parallel decoding for DLLMs—i.e., preserving exact output distribution and computational equivalence—thereby establishing a new paradigm for efficient DLLM deployment.

📝 Abstract
Diffusion Large Language Models (DLLMs) have emerged as a new paradigm of language modeling beyond autoregressive next-token prediction. Thanks to their bidirectional attention mechanism, DLLMs are better at capturing contextual connections, and thus show unique advantages on challenges like the famous "reversal curse" and learning under data-constrained scenarios. However, this bidirectional nature also brings an obstacle: DLLMs are not inherently compatible with KV Cache, so their inference efficiency is not competitive with autoregressive models. By taking advantage of their inherent capability of multi-token prediction, existing parallel decoding algorithms can speed up DLLM inference, but at the cost of non-negligible performance degradation. To overcome this challenge, we introduce Free Draft-and-Verification (Freedave), a novel fast sampling algorithm tailored for DLLMs that achieves lossless parallel decoding. Specifically, we propose a pipeline of parallel-decoded candidate generation and verification, which is guaranteed to reproduce the same sequence generated by static sampling, without introducing extra model forward calls. By applying Freedave, the throughput of DLLMs can be boosted up to $2.8\times$ without performance degradation on math reasoning tasks.
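The abstract's point about KV Cache incompatibility can be seen in a toy self-attention example (not from the paper; all code below is an illustrative sketch): with a causal mask, appending a token leaves earlier positions' outputs unchanged, so cached keys/values remain valid, whereas bidirectional attention recomputes every position whenever a new token is revealed.

```python
import numpy as np

def attention(x, causal):
    # Toy single-head self-attention (no projections) over d-dim vectors.
    scores = x @ x.T / np.sqrt(x.shape[-1])
    if causal:
        # Mask out future positions (upper triangle above the diagonal).
        mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)
        scores = np.where(mask, -1e9, scores)
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ x

rng = np.random.default_rng(0)
x5 = rng.normal(size=(5, 8))                      # 5-token sequence
x6 = np.vstack([x5, rng.normal(size=(1, 8))])     # same sequence + 1 new token

# Causal: outputs for the first 5 positions are unaffected by the new token,
# which is exactly what makes a KV Cache reusable.
assert np.allclose(attention(x5, causal=True), attention(x6, causal=True)[:5])

# Bidirectional: every position attends to the new token, so cached
# prefix computations are invalidated.
assert not np.allclose(attention(x5, causal=False), attention(x6, causal=False)[:5])
```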
Problem

Research questions and friction points this paper is trying to address.

DLLMs lack KV Cache compatibility, reducing inference efficiency
Parallel decoding causes performance degradation in DLLMs
Achieving lossless parallel decoding for diffusion language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Free Draft-and-Verification algorithm for parallel decoding
Pipeline generates and verifies candidates without extra calls
Guarantees lossless performance while boosting inference throughput
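The draft-and-verify pipeline above can be sketched roughly as follows. This is a minimal greedy-decoding illustration of the general idea, not the paper's actual algorithm; `predict` (one forward pass returning the model's argmax token at every position) and all other names are assumptions for the sketch.

```python
def freedave_step(predict, tokens, masked_positions, k):
    """One draft-and-verify round (greedy sketch).

    tokens: partially decoded sequence, with None at masked positions.
    predict(tokens): hypothetical one-forward-pass argmax prediction
    for every position, enabled by the model's bidirectional attention.
    """
    # Draft: fill the next k masked positions with parallel predictions.
    draft = predict(tokens)
    positions = masked_positions[:k]
    candidate = list(tokens)
    for pos in positions:
        candidate[pos] = draft[pos]

    # Verify: re-score the candidate; bidirectional attention re-evaluates
    # each drafted token against the now-filled surrounding context.
    check = predict(candidate)
    accepted = []
    for pos in positions:
        if check[pos] != candidate[pos]:
            break  # first mismatch: static sampling would diverge here
        accepted.append(pos)

    # Keep only the verified prefix, reproducing static sampling exactly.
    # (In the paper's pipeline the verification pass also serves as the
    # next round's draft, so no extra forward calls are introduced.)
    result = list(tokens)
    for pos in accepted:
        result[pos] = candidate[pos]
    return result, accepted
```

For instance, with a context-independent toy `predict`, every drafted token verifies and all `k` positions are accepted in one round; if filling one position changes the model's prediction at a later position, acceptance stops at the first mismatch, which is what keeps the output identical to one-at-a-time static sampling.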
Shutong Wu
University of Wisconsin–Madison
Machine Learning
Jiawei Zhang
Department of Computer Sciences, University of Wisconsin–Madison