🤖 AI Summary
Diffusion language models (dLLMs) suffer from slow decoding due to iterative sampling, severely limiting practical deployment. To address this, we propose SchED, a training-free, model-agnostic early-exit algorithm that enables efficient and stable early termination during dLLM inference. SchED introduces a process-aware smoothed confidence scheduling mechanism, dynamically setting progress-dependent thresholds based on full-span logit margins and prediction entropy. This is the first approach to achieve robust and efficient early stopping for dLLMs, overcoming the failure of prior methods on long-text generation. On instruction-tuned models, SchED achieves 3.8–4.0× speedup with 99.8%–100% performance retention; on base models, it attains up to 2.34× acceleration while significantly outperforming prior methods in queries-per-second (QPS).
📝 Abstract
Diffusion large language models (dLLMs) offer a promising alternative to autoregressive models, but their practical utility is severely hampered by slow, iterative sampling. We present SchED, a training-free, model-agnostic early-exit algorithm that aggregates full-span logit margins and halts decoding once a smooth, progress-dependent confidence threshold is met. We evaluate SchED on two dLLM families (Dream and LLaDA), in both base and instruction-tuned variants, across ten benchmarks spanning downstream tasks including multiple-choice question answering (MCQ), math, long-form QA/summarization, and translation. SchED delivers large, stable accelerations: on instruction-tuned models, it achieves $3.8$-$4.0\times$ speedups while retaining $99.8\%$-$100\%$ of the baseline score on average. On base models, SchED yields consistent speedup gains with $99.1\%$-$100\%$ performance retention, reaching up to $2.34\times$ under more aggressive settings. Using a conservative speed metric that heavily penalizes quality loss (QPS, $\gamma{=}4$), we show that SchED is robust and clearly outperforms prior confidence-based early-exit methods, which break down on long-form generation. An entropy analysis of the model's token predictions reveals that instruction tuning speeds up the decay of predictive entropy. By turning genuine confidence stabilization into computational savings, SchED makes dLLM decoding substantially more efficient.
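To make the stopping rule concrete, here is a minimal sketch of the kind of early-exit check the abstract describes: aggregate a confidence signal (here, the mean top-1/top-2 logit margin over the full span) and exit once it clears a smooth, progress-dependent threshold. The function names, the linear schedule, and the threshold values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def span_margin_confidence(logits: np.ndarray) -> float:
    """Mean top-1 minus top-2 logit margin over all span positions.

    `logits` has shape (span_len, vocab_size). This is one plausible
    full-span confidence aggregate; the paper's exact statistic may differ.
    """
    sorted_logits = np.sort(logits, axis=-1)  # ascending along vocab axis
    margins = sorted_logits[..., -1] - sorted_logits[..., -2]
    return float(margins.mean())

def scheduled_threshold(step: int, total_steps: int,
                        tau_hi: float = 8.0, tau_lo: float = 2.0) -> float:
    """Progress-dependent threshold: strict early, lenient late.

    A linear decay is used purely as a placeholder for the paper's
    smoothed schedule; tau_hi/tau_lo are made-up values.
    """
    progress = step / total_steps
    return tau_hi + (tau_lo - tau_hi) * progress

def should_exit(logits: np.ndarray, step: int, total_steps: int) -> bool:
    """Halt iterative decoding once confidence exceeds the scheduled bar."""
    return span_margin_confidence(logits) >= scheduled_threshold(step, total_steps)
```

In a dLLM sampling loop, `should_exit` would be queried after each denoising step; because the threshold relaxes with progress, the same confidence level that is insufficient early on can trigger termination later, which is what makes the schedule stable on long-form generation.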