Speculative Speculative Decoding

πŸ“… 2026-03-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work proposes Speculative Speculative Decoding (SSD), a decoding paradigm that removes the sequential dependency between speculation and verification inherent in existing speculative decoding (and, more fundamentally, in autoregressive generation). SSD parallelizes the two operations for the first time: while the target model verifies the current draft, the draft model predicts likely verification outcomes and pre-emptively generates subsequent tokens for each, so that a correct prediction hides the drafting overhead entirely. To realize this approach, the authors introduce Saguaro, an algorithm that addresses the three core challenges of SSD by combining draft–target model collaboration, verification-outcome prediction, and pre-emptive speculation. On open-source inference engines, SSD achieves up to 2Γ— speedup over optimized speculative decoding and up to 5Γ— speedup over conventional autoregressive decoding.
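
The mechanism described above has no accompanying code on this page; the sketch below is a minimal, illustrative Python rendering of one SSD step, not the paper's implementation. The `target.verify`, `draft.predict_outcomes`, and `draft.generate` interfaces, and the representation of outcomes as tuples of token ids, are all assumptions made for the example.

```python
import concurrent.futures

def ssd_step(target, draft, context, draft_tokens, k_outcomes=2, draft_len=4):
    """One speculative-speculative-decoding step (illustrative sketch only).

    While the target model verifies `draft_tokens`, the draft model
    guesses the k most likely verification outcomes (accepted prefixes,
    represented here as tuples of token ids) and pre-drafts a
    continuation for each guess.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        # Kick off verification of the current draft on the target model
        # in a worker thread (hypothetical `verify` API).
        verify_future = pool.submit(target.verify, context, draft_tokens)

        # Concurrently, predict likely outcomes and pre-draft for each.
        # Both `predict_outcomes` and `generate` are hypothetical APIs.
        predicted = draft.predict_outcomes(context, draft_tokens, top_k=k_outcomes)
        pre_drafts = {
            outcome: draft.generate(context + list(outcome), n=draft_len)
            for outcome in predicted
        }

        accepted = tuple(verify_future.result())  # tokens the target accepted

    if accepted in pre_drafts:
        # Hit: the next draft is already prepared, so its cost is hidden
        # behind verification (the SSD fast path).
        return accepted, pre_drafts[accepted]

    # Miss: fall back to drafting after verification, as in plain
    # speculative decoding.
    return accepted, draft.generate(context + list(accepted), n=draft_len)
```

The key point of the sketch is the overlap: verification runs in a worker thread while outcome prediction and pre-drafting proceed concurrently, so a hit returns a ready draft with no drafting latency on the critical path.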

πŸ“ Abstract
Autoregressive decoding is bottlenecked by its sequential nature. Speculative decoding has become a standard way to accelerate inference by using a fast draft model to predict upcoming tokens from a slower target model, and then verifying them in parallel with a single target-model forward pass. However, speculative decoding itself relies on a sequential dependence between speculation and verification. We introduce speculative speculative decoding (SSD) to parallelize these operations. While a verification is ongoing, the draft model predicts likely verification outcomes and prepares speculations pre-emptively for them. If the actual verification outcome is then in the predicted set, a speculation can be returned immediately, eliminating drafting overhead entirely. We identify three key challenges presented by speculative speculative decoding, and suggest principled methods to solve each. The result is Saguaro, an optimized SSD algorithm. Our implementation is up to 2Γ— faster than optimized speculative decoding baselines and up to 5Γ— faster than autoregressive decoding with open-source inference engines.
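
For contrast with SSD, the baseline the abstract refers to is plain draft-then-verify speculative decoding, where the two phases strictly alternate. The sketch below is an assumption-laden illustration, not code from the paper: `draft.generate_with_probs` and `target.probs` are hypothetical interfaces, and the acceptance test is the standard rejection-sampling rule from the speculative decoding literature (accept a drafted token x with probability min(1, p(x)/q(x))).

```python
import random

def speculative_decoding_step(target, draft, context, draft_len=4):
    """One step of plain speculative decoding (the sequential baseline).

    Drafting and verification alternate: verification cannot start
    until drafting finishes, and the next draft cannot start until
    verification finishes -- the dependency that SSD removes.
    """
    # 1. Draft: the fast model proposes draft_len candidate tokens,
    #    along with its probability q_i for each (hypothetical API).
    candidates, q = draft.generate_with_probs(context, n=draft_len)

    # 2. Verify: a single target forward pass scores every candidate
    #    in parallel, giving the target probability p_i of each.
    p = target.probs(context, candidates)

    # 3. Accept the longest prefix that passes the standard
    #    rejection-sampling test: keep token x_i with prob min(1, p_i/q_i).
    accepted = []
    for x, p_i, q_i in zip(candidates, p, q):
        if random.random() < min(1.0, p_i / q_i):
            accepted.append(x)
        else:
            # On rejection, real implementations also resample a corrected
            # token from the residual distribution (omitted for brevity).
            break
    return accepted
```
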
Problem

Research questions and friction points this paper is trying to address.

autoregressive decoding
speculative decoding
sequential bottleneck
parallelization
inference acceleration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Speculative Decoding
Parallel Inference
Autoregressive Models
Speculative Speculative Decoding
Efficient LLM Inference