Speculative Diffusion Decoding: Accelerating Language Generation through Diffusion

📅 2024-08-10
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing speculative decoding is constrained by the sequential, token-by-token generation of autoregressive draft models, limiting its acceleration potential. To address this, we introduce discrete diffusion models into the speculative decoding framework for the first time, enabling full-sequence parallel draft generation—replacing incremental token-level drafting—and achieving complete parallelization of both draft and verification stages. Our method integrates discrete diffusion modeling, parallel sequence verification, and joint optimization of language modeling and sampling. On standard generation benchmarks, our approach achieves up to 8.7× speedup over baseline autoregressive inference and up to 2.5× improvement over state-of-the-art speculative decoding methods, while strictly preserving generation quality without degradation. By eliminating the token-level temporal dependency inherent in conventional speculative decoding, our work establishes a novel paradigm for efficient large language model inference.

📝 Abstract
Speculative decoding has emerged as a widely adopted method to accelerate large language model inference without sacrificing the quality of the model outputs. While this technique has facilitated notable speed improvements by enabling parallel sequence verification, its efficiency remains inherently limited by the reliance on incremental token generation in existing draft models. To overcome this limitation, this paper proposes an adaptation of speculative decoding which uses discrete diffusion models to generate draft sequences. This allows parallelization of both the drafting and verification steps, providing significant speed-ups to the inference process. Our proposed approach, Speculative Diffusion Decoding (SpecDiff), is validated on standard language generation benchmarks and empirically demonstrated to provide up to an 8.7x speed-up over standard generation processes and up to a 2.5x speed-up over existing speculative decoding approaches.
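To make the draft-then-verify mechanism concrete, here is a minimal greedy sketch of the speculative decoding loop that SpecDiff builds on, with the diffusion drafter abstracted as a callable that proposes all draft tokens in one parallel step. The names and interfaces (`speculative_step`, `draft_fn`, `target_fn`) are illustrative assumptions, not the paper's actual implementation.

```python
def speculative_step(prefix, num_draft_tokens, draft_fn, target_fn):
    """One greedy speculative decoding step (illustrative sketch).

    draft_fn(prefix, k) -> list of k proposed tokens; in SpecDiff this
        would be a discrete diffusion model emitting the whole draft
        in parallel rather than token by token.
    target_fn(seq) -> the target model's greedy next-token prediction
        at every position of seq, obtained in one parallel forward pass.
    """
    draft = draft_fn(prefix, num_draft_tokens)
    # Verify every draft position with a single target-model pass.
    preds = target_fn(prefix + draft)
    accepted = []
    for i, tok in enumerate(draft):
        # Target's prediction for this position, conditioned on everything
        # before it (prefix plus previously accepted draft tokens).
        target_tok = preds[len(prefix) + i - 1]
        if target_tok == tok:
            accepted.append(tok)
        else:
            # First mismatch: fall back to the target's own token and stop,
            # so output is identical to pure target-model decoding.
            accepted.append(target_tok)
            break
    else:
        # Entire draft accepted; take one bonus token from the target.
        accepted.append(preds[len(prefix) + len(draft) - 1])
    return prefix + accepted
```

Because both the draft proposal and the verification are single parallel passes, the per-step cost no longer scales with the number of drafted tokens, which is the source of the reported speed-ups.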
Problem

Research questions and friction points this paper is trying to address.

Speculative Decoding
Large Language Models
Generation Speed Acceleration
Innovation

Methods, ideas, or system contributions that make the work stand out.

SpecDiff
Discrete Diffusion Models
Speculative Decoding