Spiffy: Multiplying Diffusion LLM Acceleration via Lossless Speculative Decoding

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion large language models (dLLMs) suffer from low inference throughput because, in practice, they decode only a single token per denoising step in order to preserve output quality. To address this, we propose Spiffy, a lossless speculative decoding framework that eliminates the need for an auxiliary draft model: draft states are proposed from the dLLM's own distribution in an auto-speculative manner. Our method automatically constructs a directed draft graph to enable multi-token parallel speculation and applies an offline calibration algorithm to jointly optimize acceptance rate and verification efficiency. Spiffy is also complementary to KV-cache reuse and multi-token unmasking, which further accelerate parallel verification. Crucially, the framework provably preserves the original output distribution of the base dLLM. Evaluated across multiple benchmarks, our approach achieves a 2.8–3.1× speedup over standard dLLM inference; when combined with these complementary decoding optimizations, end-to-end acceleration reaches up to 7.9×, significantly improving dLLM inference throughput.
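For intuition, here is a minimal sketch of the lossless verification step, assuming a greedy (argmax) denoising rule so that acceptance reduces to an exact token match. `denoise_logits` and `verify_draft_chain` are hypothetical stand-ins for illustration, not actual Spiffy APIs.

```python
import numpy as np

def denoise_logits(block_states: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for one batched dLLM forward pass:
    # (batch, block_len) token ids, with -1 marking masked slots,
    # -> (batch, block_len, vocab) logits. Random here for the sketch.
    rng = np.random.default_rng(0)
    batch, block_len = block_states.shape
    return rng.standard_normal((batch, block_len, 32))

def verify_draft_chain(base_state: np.ndarray, draft_chain: list) -> np.ndarray:
    # Each entry of draft_chain unmasks exactly one more position of the
    # block than its predecessor. A single batched call scores the state
    # *before* every drafted step, so all steps are verified in parallel.
    states = np.stack([base_state] + draft_chain[:-1])
    logits = denoise_logits(states)
    accepted = base_state
    for step, drafted in enumerate(draft_chain):
        pos = int(np.argmax(drafted != states[step]))  # the newly unmasked slot
        if int(np.argmax(logits[step, pos])) != int(drafted[pos]):
            break  # base model disagrees: stop here, output stays lossless
        accepted = drafted
    return accepted
```

Under greedy decoding, every accepted token is exactly what the base dLLM would have produced one denoising step at a time, which is the sense in which the scheme is lossless; a sampling-based variant would replace the argmax match with a rejection-sampling test.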

📝 Abstract
Diffusion LLMs (dLLMs) have recently emerged as a powerful alternative to autoregressive LLMs (AR-LLMs) with the potential to operate at significantly higher token generation rates. However, currently available open-source dLLMs often generate at much lower rates, typically decoding only a single token at every denoising timestep in order to maximize output quality. We present Spiffy, a speculative decoding algorithm that accelerates dLLM inference by $\mathbf{2.8{-}3.1\times}$ while provably preserving the model's output distribution. This work addresses the unique challenges involved in applying ideas from speculative decoding of AR-LLMs to the dLLM setting. Spiffy proposes draft states by leveraging the dLLM's distribution itself in an auto-speculative manner. This approach is efficient and effective, and eliminates the overheads of training and running an independent draft model. To structure the candidate draft states, we propose a novel directed draft graph which is uniquely designed to take advantage of the bidirectional, block-wise nature of dLLM generation and can be verified in parallel by the dLLM. To further optimize the structure of these draft graphs, we introduce an efficient, offline calibration algorithm that procedurally determines high-quality graph configurations. These optimized draft graphs, enabling increased acceptance rates, lead to a significant boost in the overall speedup achieved by the system. Crucially, Spiffy is also complementary to other recent innovations in improving dLLM generation speeds such as KV-caching and multi-token unmasking. We demonstrate that when combined with such parallel decoding algorithms, Spiffy is able to effectively multiply the benefits of these methods leading to total speedups of up to $\mathbf{7.9\times}$.
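The directed draft graph can be pictured as follows: each node is a partially unmasked block state, and each edge unmasks one additional position. The sketch below is an illustrative toy construction (branching over assumed top-k candidate tokens per masked slot), not the paper's exact algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class DraftNode:
    unmasked: dict                          # position -> drafted token id
    children: list = field(default_factory=list)

def build_draft_graph(topk_per_pos: dict, depth: int) -> DraftNode:
    # topk_per_pos: masked position -> candidate token ids, e.g. taken
    # from the dLLM's own per-position marginals (auto-speculation).
    # Each level of the graph unmasks one more position; because dLLM
    # blocks are bidirectional, positions may be unmasked in any order.
    root = DraftNode(unmasked={})
    frontier = [root]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for pos, candidates in topk_per_pos.items():
                if pos in node.unmasked:
                    continue
                for tok in candidates:
                    child = DraftNode(unmasked={**node.unmasked, pos: tok})
                    node.children.append(child)
                    next_frontier.append(child)
        frontier = next_frontier
    return root

# Example: two masked positions with two and one candidate tokens.
graph = build_draft_graph({0: [11, 42], 3: [7]}, depth=2)
print(len(graph.children))  # 3 first-level nodes: (0, 11), (0, 42), (3, 7)
```

All nodes can be flattened into a single batch and scored in one dLLM forward pass; the deepest root-to-node path whose every unmasking the base model confirms is accepted. Since such a graph grows combinatorially with depth, only a budgeted, calibrated subset of nodes is verified in practice, which is what the offline calibration step selects.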
Problem

Research questions and friction points this paper is trying to address.

How to accelerate diffusion LLM inference while provably preserving the model's output distribution
How to apply speculative decoding to diffusion LLMs without training or running a separate draft model
How to structure and optimize draft graphs to maximize token acceptance rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lossless speculative decoding algorithm for diffusion LLMs
Auto-speculative draft generation from the dLLM's own distribution
Directed draft graphs optimized via an offline calibration algorithm (see the sketch after this list)
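As a rough illustration of what such a calibration might look like, here is a greedy budgeted selection over candidate graph nodes, assuming per-node acceptance statistics measured offline on a calibration set. The scoring rule, the node budget, and all names here are assumptions, since the listing describes the procedure only at a high level.

```python
def calibrate_graph(candidate_nodes, accept_prob, gain, budget):
    # candidate_nodes: iterable of node ids in the draft graph.
    # accept_prob[n]: measured probability that node n's path is accepted.
    # gain[n]: tokens gained if node n is accepted.
    # Keep the nodes with the highest expected token gain until the
    # verification budget (e.g. the batch size) is exhausted.
    scored = sorted(candidate_nodes,
                    key=lambda n: accept_prob[n] * gain[n],
                    reverse=True)
    return scored[:budget]

# Example: with a verification budget of 2, keep the two nodes whose
# expected accepted-token gain is highest.
kept = calibrate_graph(["a", "b", "c"],
                       accept_prob={"a": 0.9, "b": 0.4, "c": 0.7},
                       gain={"a": 1, "b": 3, "c": 2},
                       budget=2)
print(kept)  # ['c', 'b']: 0.7*2 = 1.4 and 0.4*3 = 1.2 beat 0.9*1 = 0.9
```

The design intuition, per the abstract, is that better graph configurations raise acceptance rates without inflating the parallel verification cost, which is what drives the overall speedup.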