🤖 AI Summary
Autoregressive large language models suffer from high inference latency and low GPU utilization, and existing speculative decoding methods are constrained by sequential draft generation, which limits their acceleration potential. This work proposes DFlash, the first framework to bring block diffusion into speculative decoding. DFlash employs a lightweight block diffusion model that generates multiple candidate tokens in parallel within a single forward pass, conditioned on context features extracted from the target model to ensure high generation quality and acceptance rates. By circumventing the sequential bottleneck of conventional autoregressive draft models, DFlash achieves over 6× lossless speedup across diverse models and tasks, up to 2.5× the speedup of the current state-of-the-art method, EAGLE-3.
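To make the parallel-drafting idea concrete, here is a minimal, hypothetical sketch of a one-step block diffusion drafter: every position in the block starts as a mask token and is predicted jointly in a single forward pass, conditioned on context features from the target model. The class name, layer sizes, and `generate_block` interface are illustrative assumptions, not DFlash's actual architecture.

```python
import torch
import torch.nn as nn

class BlockDiffusionDrafter(nn.Module):
    """Toy one-step denoiser: drafts a whole block of tokens in parallel.
    Names and shapes are assumptions for illustration, not DFlash's design."""

    def __init__(self, vocab_size: int, d_model: int, mask_id: int):
        super().__init__()
        self.mask_id = mask_id
        self.embed = nn.Embedding(vocab_size, d_model)
        # A single bidirectional layer stands in for the lightweight drafter.
        self.denoise = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    @torch.no_grad()
    def generate_block(self, context: torch.Tensor, block_size: int) -> torch.Tensor:
        # context: (batch, ctx_len, d_model) features taken from the target model.
        batch = context.shape[0]
        masked = torch.full((batch, block_size), self.mask_id,
                            dtype=torch.long, device=context.device)
        # Append an all-mask block after the conditioning features, then
        # denoise every block position in ONE forward pass (no token loop).
        hidden = self.denoise(torch.cat([context, self.embed(masked)], dim=1))
        return self.head(hidden[:, -block_size:]).argmax(dim=-1)  # (batch, block_size)
```

Under these assumptions, `BlockDiffusionDrafter(32000, 256, mask_id=0).generate_block(torch.randn(1, 16, 256), 8)` returns a `(1, 8)` block of draft token ids from one forward pass; an autoregressive drafter would need eight sequential passes for the same block.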
📝 Abstract
Autoregressive large language models (LLMs) deliver strong performance but require inherently sequential decoding, leading to high inference latency and poor GPU utilization. Speculative decoding mitigates this bottleneck by using a fast draft model whose outputs are verified in parallel by the target LLM; however, existing methods still rely on autoregressive drafting, which remains sequential and limits practical speedups. Diffusion LLMs offer a promising alternative by enabling parallel generation, but current diffusion models typically underperform autoregressive models. In this paper, we introduce DFlash, a speculative decoding framework that employs a lightweight block diffusion model for parallel drafting. By generating draft tokens in a single forward pass and conditioning the draft model on context features extracted from the target model, DFlash enables efficient drafting with high-quality outputs and higher acceptance rates. Experiments show that DFlash achieves over 6× lossless acceleration across a range of models and tasks, delivering up to 2.5× the speedup of the state-of-the-art speculative decoding method EAGLE-3.
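For readers unfamiliar with the draft-and-verify loop the abstract assumes, the sketch below shows one speculative decoding step with a parallel block drafter and greedy (hence lossless, under greedy decoding) verification. The `target_model` is assumed to expose a Hugging Face-style interface with `.logits` and `output_hidden_states`; the drafter call and acceptance rule are simplified stand-ins for DFlash's actual procedure.

```python
import torch

@torch.no_grad()
def speculative_step(target_model, drafter, input_ids, block_size=8):
    """One draft-verify iteration. Interfaces are assumed, not DFlash's own."""
    # 1. One target pass over the prefix; its hidden states serve as the
    #    context features that condition the draft model.
    out = target_model(input_ids, output_hidden_states=True)
    context = out.hidden_states[-1]

    # 2. The block diffusion drafter fills the whole block in parallel
    #    with a single forward pass (see the drafter sketch above).
    draft = drafter.generate_block(context, block_size)  # (1, block_size)

    # 3. One target pass over prefix + draft scores every draft token at
    #    once; accept the longest prefix the target itself would emit.
    logits = target_model(torch.cat([input_ids, draft], dim=-1)).logits
    preds = logits[:, input_ids.shape[-1] - 1:].argmax(dim=-1)  # block_size + 1 preds

    n_accept = 0
    while n_accept < block_size and preds[0, n_accept] == draft[0, n_accept]:
        n_accept += 1

    # Keep accepted tokens plus one "free" target token (a correction, or
    # an extension when the whole block was accepted).
    return torch.cat([input_ids, draft[:, :n_accept],
                      preds[:, n_accept:n_accept + 1]], dim=-1)
```

Each iteration emits between one and `block_size + 1` tokens while paying for only two target-model passes, so the more draft tokens the target accepts per step, the fewer sequential target passes are needed; that is where the reported 6× lossless acceleration comes from.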