🤖 AI Summary
This work addresses the challenge of balancing generation speed and quality in few-step decoding of block-diffusion language models, where conventional confidence-thresholding approaches fall short. The authors propose S2D2, a training-free self-speculative decoding framework that exploits the fact that a block-diffusion model becomes autoregressive at block size one, so the same pretrained model can serve as both drafter and verifier. By inserting a lightweight verification step into parallel generation and using routing policies to decide when verification is worth its cost, the method produces a hybrid decoding trajectory in which diffusion proposes tokens in parallel while the autoregressive mode acts as a local sequence-level critic. Experiments show up to a 4.7× speedup over autoregressive decoding on SDAR, outperforming the best dynamic baseline by up to 1.57× in speed and up to +4.5 accuracy points; on LLaDA2.1-Mini, S2D2 achieves a 4.4× speedup over the static baseline while slightly improving accuracy.
📝 Abstract
Block-diffusion language models offer a promising path toward faster-than-autoregressive generation by combining block-wise autoregressive decoding with within-block parallel denoising. However, in the few-step regime needed for practical acceleration, standard confidence-thresholded decoding is often brittle: aggressive thresholds hurt quality, while conservative thresholds require unnecessary denoising steps. Existing approaches that address this issue either require additional training or incur extra test-time compute. We present S2D2, a training-free self-speculative decoding framework for block-diffusion language models. Our key observation is that a block-diffusion model becomes autoregressive when the block size is reduced to one, allowing the same pretrained model to act as both drafter and verifier. S2D2 inserts a speculative verification step into standard block-diffusion decoding and uses lightweight routing policies to decide when verification is worth its cost. This yields a hybrid decoding trajectory in which diffusion proposes tokens in parallel, while the autoregressive mode acts as a local sequence-level critic. Across three mainstream block-diffusion families, S2D2 consistently improves the accuracy-speed tradeoff over strong confidence-thresholding baselines. On SDAR, we observe up to $4.7\times$ speedup over autoregressive decoding, and up to $1.57\times$ over a tuned dynamic decoding baseline while improving accuracy by up to $4.5$ points. On LLaDA2.1-Mini, S2D2 remains complementary to built-in self-correction, including a conservative setting where it is $4.4\times$ faster than the static baseline with slightly higher accuracy.
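The draft-then-verify loop described above can be sketched in a few lines. This is a hypothetical minimal sketch, not the authors' implementation: the `Toy` model, its `parallel_guess` and `ar_predict` methods, and the `route` policy are illustrative stand-ins for the diffusion drafter, the block-size-one autoregressive verifier, and S2D2's routing heuristics, respectively.

```python
def draft_block(model, prefix, block_size):
    """Diffusion-style draft: propose block_size tokens in parallel.

    A few-step drafter guesses all positions at once, so it cannot
    condition later tokens in the block on earlier drafted ones.
    """
    return [model.parallel_guess(prefix, i) for i in range(block_size)]


def verify_block(model, prefix, draft):
    """Autoregressive verification (block size one) with the SAME model.

    Re-predict each drafted token given the verified prefix; keep the
    longest matching prefix, and on the first mismatch emit the
    verifier's token instead (standard speculative-decoding correction).
    """
    accepted = []
    for tok in draft:
        expected = model.ar_predict(prefix + accepted)
        if tok != expected:
            accepted.append(expected)
            break
        accepted.append(tok)
    return accepted


def s2d2_decode(model, prompt, max_len, block_size, route):
    """Hybrid loop: diffusion drafts a block; a routing policy decides
    whether that block is worth the cost of autoregressive verification."""
    out = list(prompt)
    while len(out) < max_len:
        draft = draft_block(model, out, block_size)
        if route(out, draft):  # e.g. verify only low-confidence blocks
            out += verify_block(model, out, draft)
        else:
            out += draft
    return out[:max_len]


class Toy:
    """Illustrative deterministic 'model' over digit tokens (not a real LM)."""

    def ar_predict(self, prefix):
        # Next token depends on the full prefix, including just-accepted tokens.
        return (sum(prefix) + 1) % 10

    def parallel_guess(self, prefix, i):
        # The drafter sees only the prefix before the block, so positions
        # after the first can disagree with the autoregressive answer.
        return (sum(prefix) + 1 + i) % 10
```

With `route` always returning `True`, every block is verified and the output reproduces greedy autoregressive decoding exactly; with `route` always `False`, the raw parallel drafts are kept, which is faster but diverges wherever intra-block dependencies matter. S2D2's routing policies sit between these two extremes.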