Accelerate Speculative Decoding with Sparse Computation in Verification

📅 2025-12-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
The verification stage of speculative decoding becomes a significant computational bottleneck, especially under long-context and Mixture-of-Experts (MoE) settings, and existing sparsification methods fail to accommodate its multi-draft parallelism and cross-layer dependencies. Method: We systematically identify structural redundancy in the verification stage across three dimensions: attention, feed-forward networks (FFNs), and MoE expert selection. We propose a joint sparsification framework tailored to verification that integrates sparse attention, sparse FFNs, and dynamic MoE expert pruning, augmented by a retrieval reuse mechanism across draft tokens and layers, and requires no additional training. Contribution/Results: Evaluated on summarization, question answering, and mathematical reasoning tasks, our method reduces verification-stage computation by up to 58%, maintains stable token acceptance rates, and achieves superior efficiency-accuracy trade-offs without compromising model performance.
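To make the sparse-attention dimension concrete: during verification, several draft tokens attend to the same long-context KV cache in parallel, so each draft query can be restricted to its top-k highest-scoring cached positions before the softmax. The sketch below is a minimal illustration of that idea, not the authors' implementation; `sparse_verify_attention`, the tensor shapes, and the fixed `top_k` selection rule are all assumptions.

```python
import numpy as np

def sparse_verify_attention(q_draft, K, V, top_k=64):
    """Attend each draft token under verification to only its top_k
    highest-scoring cached keys instead of the full long-context cache.

    q_draft: (n_draft, d) queries for the draft tokens verified in parallel
    K, V:    (n_ctx, d)   cached keys/values for the prefix context
    """
    d = q_draft.shape[-1]
    scores = q_draft @ K.T / np.sqrt(d)                      # (n_draft, n_ctx)
    # Threshold = k-th largest score per row; everything below is masked out.
    kth = np.partition(scores, -top_k, axis=-1)[:, -top_k:].min(axis=-1, keepdims=True)
    masked = np.where(scores >= kth, scores, -np.inf)
    # Numerically stable softmax over the surviving positions only.
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                       # (n_draft, d)

# Example: verify 4 draft tokens against a 4096-token context.
rng = np.random.default_rng(0)
out = sparse_verify_attention(rng.normal(size=(4, 64)),
                              rng.normal(size=(4096, 64)),
                              rng.normal(size=(4096, 64)))
print(out.shape)  # (4, 64)
```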

📝 Abstract
Speculative decoding accelerates autoregressive language model inference by verifying multiple draft tokens in parallel. However, the verification stage often becomes the dominant computational bottleneck, especially for long-context inputs and mixture-of-experts (MoE) models. Existing sparsification methods, designed primarily for standard token-by-token autoregressive decoding, remove substantial computational redundancy in LLMs but do not account for the multi-draft parallelism of verification. This work systematically applies different sparsification methods to the verification stage of speculative decoding and identifies structured redundancy across multiple dimensions. Based on these observations, we propose a sparse verification framework that jointly sparsifies attention, FFN, and MoE components during the verification stage to reduce the dominant computation cost. The framework further incorporates an inter-draft-token and inter-layer retrieval reuse strategy to reduce redundant computation without introducing additional training. Extensive experiments across summarization, question answering, and mathematical reasoning datasets demonstrate that the proposed methods achieve favorable efficiency-accuracy trade-offs while maintaining stable acceptance length.
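On the MoE side, dynamic expert pruning can be pictured as routing each draft token only to the few experts that carry most of the router's probability mass, rather than a fixed top-2. The following is a minimal sketch under that assumption; `prune_experts`, `keep_mass`, and `max_experts` are illustrative names and defaults, not taken from the paper.

```python
import numpy as np

def prune_experts(router_logits, keep_mass=0.9, max_experts=2):
    """Keep the smallest set of experts (capped at max_experts) whose
    router probability mass reaches keep_mass; route the token only to
    those experts with renormalized gate weights."""
    probs = np.exp(router_logits - router_logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]          # experts, most probable first
    kept, mass = [], 0.0
    for expert in order[:max_experts]:
        kept.append(int(expert))
        mass += probs[expert]
        if mass >= keep_mass:
            break
    gates = probs[kept] / probs[kept].sum()  # renormalize over kept experts
    return kept, gates

# Example: an 8-expert router where one expert already covers most mass.
logits = np.array([0.1, 4.0, -1.0, 0.3, 1.2, -0.5, 0.0, 0.2])
print(prune_experts(logits))  # likely a single dominant expert kept
```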
Problem

Research questions and friction points this paper is trying to address.

The verification stage is the dominant computational bottleneck in speculative decoding, especially for long contexts and MoE models
Existing sparsification methods target token-by-token decoding and do not fit verification's multi-draft parallelism
Computation must be cut without additional training or degraded acceptance rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Jointly sparsifies attention, FFN, and MoE expert selection in the verification stage
Reuses retrieval results across draft tokens and layers (see the sketch after this list)
Cuts verification computation by up to 58% with no additional training
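The page does not spell out the exact reuse rule, but the inter-layer half of the retrieval reuse idea can be sketched as caching selection results (top-k key indices, pruned expert sets) at anchor layers and sharing them within a small window of layers; `SelectionCache` and `stride` are hypothetical names, and the same keying scheme could plausibly extend across neighboring draft positions.

```python
class SelectionCache:
    """Reuse sparse-selection results across neighboring layers of the
    same draft token: recompute only at anchor layers and share the
    cached result within each stride of layers."""

    def __init__(self, stride=2):
        self.stride = stride
        self._cache = {}

    def get_or_compute(self, layer, draft_pos, compute_fn):
        # Map layers {0,1}, {2,3}, ... onto the same anchor entry.
        anchor = (layer - layer % self.stride, draft_pos)
        if anchor not in self._cache:
            self._cache[anchor] = compute_fn()
        return self._cache[anchor]

# Example: layer 3, draft position 0 reuses the selection from layer 2.
cache = SelectionCache(stride=2)
cache.get_or_compute(2, 0, lambda: [5, 17, 42])
print(cache.get_or_compute(3, 0, lambda: None))  # [5, 17, 42]
```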
👥 Authors
Jikai Wang
University of Texas at Dallas
Computer Vision · Robotics · Machine Learning

Jianchao Tan
Meituan
LLM · Automated Machine Learning · Computer Graphics · Computer Vision

Yuxuan Hu
Meituan

Jiayu Qin
University at Buffalo
Machine Learning

Yerui Sun
Meituan

Yuchen Xie
Meituan

Xunliang Cai
Meituan

Juntao Li
Soochow University
Language Models · Text Generation

Min Zhang
Key Laboratory of Data Intelligence and Advanced Computing, Soochow University