Learning to Draft: Adaptive Speculative Decoding with Reinforcement Learning

πŸ“… 2026-03-02
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses a key limitation in existing speculative decoding methods, which rely on static time allocation or proxy metrics and thus fail to jointly optimize draft generation and target model verification, ultimately constraining real-world throughput. To overcome this, the paper formulates speculative decoding as a reinforcement learning problem and introduces two co-adaptive strategies that dynamically coordinate the draft and verification stages to directly maximize end-to-end decoding throughput. Extensive experiments across five large language models and four diverse tasks demonstrate that the proposed approach achieves speedups of 2.24Γ— to 4.32Γ—, outperforming the current state-of-the-art method, Eagle3, by up to 36.4%.

πŸ“ Abstract
Speculative decoding accelerates large language model (LLM) inference by using a small draft model to generate candidate tokens for a larger target model to verify. The efficacy of this technique hinges on the trade-off between the time spent drafting candidates and the time spent verifying them. However, current state-of-the-art methods rely on a static time allocation, while recent dynamic approaches optimize for proxy metrics such as acceptance length, often neglecting the true time cost and treating the drafting and verification phases in isolation. To address these limitations, we introduce Learning to Draft (LTD), a novel method that directly optimizes the throughput of each draft-and-verify cycle. We formulate the problem as a reinforcement learning environment and train two co-adaptive policies to dynamically coordinate the draft and verification phases. This encourages the policies to adapt to each other and explicitly maximize decoding efficiency. We conducted extensive evaluations on five diverse LLMs and four distinct tasks. Our results show that LTD achieves speedup ratios ranging from 2.24x to 4.32x, outperforming the state-of-the-art method Eagle3 by up to 36.4%.
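To make the draft-and-verify cycle concrete, the sketch below simulates one cycle and measures its per-cycle throughput (accepted tokens per unit time), which is the quantity the abstract says LTD's reward targets. This is a minimal illustration based on our reading of the abstract, not the paper's implementation: `draft_tokens`, `verify`, and the fixed 0.7 acceptance probability are all stand-ins for the real draft model, target model, and acceptance rule.

```python
import random
import time

def draft_tokens(prefix, k):
    """Hypothetical draft model: cheaply proposes k candidate tokens."""
    return [random.randint(0, 99) for _ in range(k)]

def verify(prefix, candidates):
    """Hypothetical target model: accepts a prefix of the candidates.
    Real speculative decoding accepts each token with a probability derived
    from the target/draft distributions; a constant stands in here."""
    accepted = []
    for tok in candidates:
        if random.random() < 0.7:  # stand-in acceptance probability
            accepted.append(tok)
        else:
            break
    # On rejection (or after accepting all k), the target emits one token
    # of its own, so every cycle makes at least one token of progress.
    accepted.append(random.randint(0, 99))
    return accepted

def decode_cycle(prefix, k):
    """One draft-and-verify cycle. Returns (new_tokens, cycle_throughput),
    where throughput = accepted tokens / wall-clock cycle time -- the
    direct objective a throughput-based RL reward would see, in contrast
    to proxy metrics like acceptance length alone."""
    start = time.perf_counter()
    candidates = draft_tokens(prefix, k)
    new_tokens = verify(prefix, candidates)
    elapsed = time.perf_counter() - start
    return new_tokens, len(new_tokens) / elapsed
```

A policy that picks the draft length `k` per cycle (and a second, co-adaptive policy scheduling verification) would be trained to maximize this throughput signal directly, which is how the abstract distinguishes LTD from acceptance-length proxies.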
Problem

Research questions and friction points this paper is trying to address.

speculative decoding
large language model inference
throughput optimization
dynamic coordination
draft-and-verify cycle
Innovation

Methods, ideas, or system contributions that make the work stand out.

speculative decoding
reinforcement learning
adaptive policy
LLM inference acceleration
throughput optimization
πŸ”Ž Similar Papers
2023-12-18 Β· Neural Information Processing Systems Β· Citations: 52
Jiebin Zhang (National Key Laboratory for Multimedia Information Processing and School of Computer Science, Peking University)
Zhenghan Yu (National Key Laboratory for Multimedia Information Processing and School of Computer Science, Peking University)
Liang Wang (Microsoft)
Nan Yang (Microsoft Research)
Eugene J. Yu (National Key Laboratory for Multimedia Information Processing, Peking University)
Zheng Li (Peking University)
Yifan Song (MOE Key Laboratory of Computational Linguistics, Peking University)
Dawei Zhu (Peking University)
Xingxing Zhang (Microsoft Research)
Furu Wei (Distinguished Scientist, Microsoft Research)
Sujian Li (National Key Laboratory for Multimedia Information Processing and School of Computer Science, Peking University)