🤖 AI Summary
This work addresses a key limitation in existing speculative decoding methods, which rely on static time allocation or proxy metrics and thus fail to jointly optimize draft generation and target model verification, ultimately constraining real-world throughput. To overcome this, the paper formulates speculative decoding as a reinforcement learning problem and introduces two co-adaptive strategies that dynamically coordinate the draft and verification stages to directly maximize end-to-end decoding throughput. Extensive experiments across five large language models and four diverse tasks demonstrate that the proposed approach achieves speedups of 2.24× to 4.32×, outperforming the current state-of-the-art method, Eagle3, by up to 36.4%.
Abstract
Speculative decoding accelerates large language model (LLM) inference by using a small draft model to generate candidate tokens for a larger target model to verify. The efficacy of this technique hinges on the trade-off between the time spent drafting candidates and the time spent verifying them. However, current state-of-the-art methods rely on a static time allocation, while recent dynamic approaches optimize for proxy metrics like acceptance length, often neglecting the true time cost and treating the drafting and verification phases in isolation. To address these limitations, we introduce Learning to Draft (LTD), a novel method that directly optimizes the throughput of each draft-and-verify cycle. We formulate the problem as a reinforcement learning environment and train two co-adaptive policies to dynamically coordinate the draft and verification phases. This encourages the policies to adapt to each other and explicitly maximize decoding efficiency. We conducted extensive evaluations on five diverse LLMs and four distinct tasks. Our results show that LTD achieves speedup ratios ranging from 2.24x to 4.32x, outperforming the state-of-the-art method Eagle3 by up to 36.4%.
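To make the draft-and-verify cycle concrete, here is a minimal, self-contained sketch of one greedy speculative decoding step. This is not LTD's method or Eagle3's implementation; the `draft_model` and `target_model` functions are toy stand-ins (simple integer-sequence predictors) chosen so that the accept/reject logic is easy to trace. The key mechanism is the same as in the abstract: the cheap draft model proposes `k` tokens, and the target model accepts the longest prefix it agrees with, contributing one corrected (or bonus) token of its own.

```python
# Toy stand-ins for the draft and target models (hypothetical, for
# illustration only): tokens are integers, and the "correct" next
# token is always last_token + 1.

def draft_model(context):
    # Cheap draft model: usually right, but drifts (off by one)
    # once tokens exceed 4, so verification can catch a mistake.
    nxt = context[-1] + 1
    return nxt if nxt <= 4 else nxt + 1

def target_model(context):
    # Target model: always predicts the ground-truth next token.
    return context[-1] + 1

def speculate_step(context, k):
    """One greedy draft-and-verify cycle: draft k candidate tokens,
    then accept the longest prefix the target model agrees with,
    plus one token supplied by the target itself."""
    # Drafting phase: the small model proposes k candidates.
    draft = []
    ctx = list(context)
    for _ in range(k):
        tok = draft_model(ctx)
        draft.append(tok)
        ctx.append(tok)

    # Verification phase: check each candidate against the target.
    accepted = []
    ctx = list(context)
    for tok in draft:
        expected = target_model(ctx)
        if tok == expected:            # target agrees: keep the draft token
            accepted.append(tok)
            ctx.append(tok)
        else:                          # first disagreement: substitute the
            accepted.append(expected)  # target's token and stop verifying
            break
    else:
        # All k drafts accepted; the target adds one bonus token.
        accepted.append(target_model(ctx))
    return context + accepted

print(speculate_step([1, 2], k=3))  # drafts 3, 4, 6 -> emits [1, 2, 3, 4, 5]
```

Note how the cycle's cost is one pass over the target per verified token here; in a real system the target scores all `k` candidates in a single batched forward pass, which is where the speedup comes from, and the paper's contribution is learning policies that choose how much to draft and verify so that this cycle's wall-clock throughput is maximized.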