Scaling LLM Speculative Decoding: Non-Autoregressive Forecasting in Large-Batch Scenarios

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing speculative decoding methods suffer from high computational overhead, scheduling complexity, and limited scalability, largely because they rely on large draft trees and costly sequential verification. To address this, the paper proposes SpecFormer, a novel Transformer architecture that integrates unidirectional and bidirectional attention mechanisms. SpecFormer is the first to bring non-autoregressive parallel generation into speculative decoding, eliminating conventional prefix-tree structures and enabling fully parallel draft-sequence generation without loss of model accuracy. It supports multi-scale training and inference, improving memory and computational efficiency. Experiments demonstrate consistent speedups across diverse model sizes and large-batch settings, with substantial gains in inference throughput. SpecFormer establishes a new low-overhead, highly scalable paradigm for efficient LLM deployment.

📝 Abstract
Speculative decoding accelerates LLM inference by exploiting computational resources that would otherwise sit idle during memory-to-chip data transfer. Current methods typically assume a considerable amount of spare compute and use a small autoregressive language model to generate a complex, massive draft tree that improves overall prediction accuracy. However, batching has been widely adopted in mainstream inference systems as a superior alternative to speculative decoding, because it absorbs the idle compute that speculative decoding depends on. Performing speculative decoding with low verification resources and low scheduling costs has therefore become an important research problem. We argue that what is truly needed are more capable draft models that generate draft sequences in parallel. Recognizing that draft models fundamentally only need to generate sequences of limited length, we propose SpecFormer, a novel architecture that integrates unidirectional and bidirectional attention mechanisms. SpecFormer combines the autoregressive model's ability to extract information from the entire input sequence with the parallel-generation benefits of non-autoregressive models. This design eliminates the reliance on large prefix trees and achieves consistent acceleration, even in large-batch scenarios. Through lossless speculative decoding experiments across models of various scales, we demonstrate that SpecFormer sets a new standard for scaling LLM inference with lower training demands and reduced computational costs.
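The draft-then-verify loop that the abstract builds on can be sketched as follows. This is a minimal illustration of standard greedy speculative decoding, not the paper's method; `draft_model` and `target_model` are hypothetical toy stand-ins (a simple arithmetic rule) rather than real language models.

```python
def draft_model(prefix, k):
    """Toy draft model: propose the next k tokens in one shot."""
    return [(prefix[-1] + i + 1) % 100 for i in range(k)]

def target_model(prefix):
    """Toy target model: the 'true' greedy next token for a prefix."""
    return (prefix[-1] + 1) % 100

def speculative_step(prefix, k=4):
    """One draft-and-verify step; returns the tokens actually accepted.

    The target model checks all k draft tokens (in a real system, in a
    single parallel forward pass) and keeps the longest correct prefix.
    """
    draft = draft_model(prefix, k)
    accepted = []
    context = list(prefix)
    for tok in draft:
        expected = target_model(context)
        if tok != expected:
            # First mismatch: keep the target's own token and stop.
            accepted.append(expected)
            break
        accepted.append(tok)
        context.append(tok)
    else:
        # All k draft tokens accepted; the target contributes a bonus token.
        accepted.append(target_model(context))
    return accepted

tokens = speculative_step([10], k=4)
```

Because verification is a single parallel pass, each step can emit up to k+1 tokens for roughly the cost of one target-model forward, which is where the speedup comes from; SpecFormer's contribution is making the draft side itself non-autoregressive as well.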
Problem

Research questions and friction points this paper is trying to address.

Optimizing speculative decoding under limited verification resources and scheduling costs
Enabling parallel draft generation without large prefix tree dependencies
Achieving consistent LLM acceleration in large-batch inference scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates unidirectional and bidirectional attention mechanisms
Combines autoregressive and non-autoregressive generation benefits
Eliminates reliance on large prefix trees for acceleration
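The page states that SpecFormer integrates unidirectional and bidirectional attention but gives no layout details. One plausible arrangement (an assumption for illustration, not the paper's exact design) is a causal mask over the input prefix combined with bidirectional attention among the draft positions that are generated in parallel:

```python
def hybrid_mask(prefix_len, draft_len):
    """Build a boolean attention mask: mask[i][j] is True when query
    position i may attend to key position j. Hypothetical layout, not
    taken from the paper."""
    n = prefix_len + draft_len
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i < prefix_len:
                # Input prefix: standard causal (unidirectional) attention.
                mask[i][j] = j <= i
            else:
                # Parallel draft slots: bidirectional attention over the
                # full prefix and all draft positions, so every draft
                # token can be predicted in a single forward pass.
                mask[i][j] = True
    return mask

mask = hybrid_mask(prefix_len=3, draft_len=2)
```

Under this layout the prefix behaves exactly like an autoregressive model (preserving its ability to condition on the whole input), while the draft block behaves non-autoregressively, which matches the combination of properties the summary attributes to SpecFormer.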