Fast-Decoding Diffusion Language Models via Progress-Aware Confidence Schedules

πŸ“… 2025-12-02
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Diffusion language models (dLLMs) suffer from slow decoding due to iterative sampling, severely limiting practical deployment. To address this, we propose SchED, a training-free, model-agnostic early-exit algorithm that enables efficient and stable early termination during dLLM inference. SchED introduces a progress-aware smoothed confidence scheduling mechanism, dynamically setting progress-dependent thresholds based on full-span logit margins and prediction entropy. This is the first approach to achieve robust and efficient early stopping for dLLMs, overcoming the failure of prior methods on long-text generation. On instruction-tuned models, SchED achieves 3.8–4.0× speedup with 99.8%–100% performance retention; on base models, it attains up to 2.34× acceleration while significantly outperforming prior methods in queries-per-second (QPS).

πŸ“ Abstract
Diffusion large language models (dLLMs) offer a promising alternative to autoregressive models, but their practical utility is severely hampered by slow, iterative sampling. We present SchED, a training-free, model-agnostic early-exit algorithm that aggregates full-span logit margins and halts decoding once a smooth, progress-dependent confidence threshold is met. We evaluated SchED on two dLLM families (Dream and LLaDA), in base and instruction-tuned variants, across ten benchmarks spanning downstream tasks including multiple-choice question answering (MCQ), math, long-form QA/summarization, and translation. SchED delivers large, stable accelerations: on instruction-tuned models, it achieves 3.8–4.0× speedups while retaining 99.8–100% of the baseline score on average. On base models, SchED yields consistent speedup gains with 99.1–100% performance retention, with up to 2.34× under more aggressive settings. Using a conservative speed metric that heavily penalizes quality loss (QPS, γ=4), we show that SchED is robust and clearly outperforms prior confidence-based early-exit methods, which break down on long-form generation. An entropy analysis of the model's token predictions reveals that instruction tuning speeds up the decay of predictive entropy. By turning genuine confidence stabilization into computational savings, SchED makes dLLM decoding substantially more efficient.
Problem

Research questions and friction points this paper is trying to address.

Accelerates diffusion language models' slow iterative sampling
Enables early-exit decoding via progress-aware confidence schedules
Maintains performance across tasks while achieving speedups
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free early-exit algorithm for diffusion language models
Progress-aware confidence schedules to halt decoding early
Aggregates full-span logit margins for stable acceleration
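The early-exit idea listed above can be sketched in a few lines. This is an illustrative reconstruction, not SchED's actual implementation: the cosine-shaped schedule, the start/end thresholds, and all function names are assumptions chosen for clarity; the paper's schedule and margin aggregation may differ.

```python
import numpy as np

def confidence_margin(logits: np.ndarray) -> float:
    """Aggregate confidence: mean top-1 minus top-2 logit margin over the full span."""
    top2 = np.sort(logits, axis=-1)[:, -2:]          # (seq_len, 2) largest two logits
    return float(np.mean(top2[:, 1] - top2[:, 0]))   # average margin across positions

def threshold_schedule(progress: float, t_start: float = 5.0, t_end: float = 1.0) -> float:
    """Smooth, monotonically decreasing threshold: strict early in decoding, lenient late.
    (Hypothetical cosine shape; the paper's schedule is progress-dependent but unspecified here.)"""
    return t_end + (t_start - t_end) * 0.5 * (1.0 + np.cos(np.pi * progress))

def should_exit(logits: np.ndarray, step: int, max_steps: int) -> bool:
    """Halt iterative diffusion decoding once the aggregated margin clears the schedule."""
    progress = step / max_steps
    return confidence_margin(logits) >= threshold_schedule(progress)
```

A decoding loop would call `should_exit` after each denoising step and stop refining once it returns `True`, converting stabilized confidence into saved iterations.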
πŸ”Ž Similar Papers
No similar papers found.
Amr Mohamed
MBZUAI
Yang Zhang
Ecole Polytechnique
M. Vazirgiannis
MBZUAI
Guokan Shang
MBZUAI-IFM Paris Lab