🤖 AI Summary
This work addresses the challenge that existing lightweight draft models in speculative decoding struggle to accurately approximate the output distribution of large language models, resulting in low acceptance rates and limited acceleration. To overcome this, the study introduces the first integration of speculative decoding with an online learning system, establishing a continuous evolution loop of “draft proposal–feedback acquisition–model update.” A theoretical framework based on dynamic regret minimization is proposed, complemented by two key strategies: optimistic online learning, which reuses historical gradients as predictive hints, and online ensemble learning, which dynamically maintains multiple draft models for adaptive optimization. Evaluated across seven benchmarks and three base models, the approach achieves up to a 24% improvement in inference speed, significantly outperforming current state-of-the-art methods.
📝 Abstract
Speculative decoding has emerged as a widely adopted paradigm for accelerating large language model inference, where a lightweight draft model rapidly generates candidate tokens that are then verified in parallel by a larger target model. However, due to limited model capacity, draft models often struggle to approximate the target distribution, resulting in shorter acceptance lengths and diminished speedup. A key yet under-explored observation is that speculative decoding inherently provides verification feedback that quantifies the deviation between the draft and target models at no additional cost. This process naturally forms an iterative "draft commits-feedback provides-draft adapts" loop, which precisely matches the online learning paradigm. Motivated by this connection, we propose OnlineSpec, a unified framework that systematically leverages interactive feedback to continuously evolve draft models. Grounded in dynamic regret minimization, we establish a formal link between online learning performance and the speculative system's acceleration rate, and develop novel algorithms via modern online learning techniques, including optimistic online learning that adaptively reuses historical gradients as predictive update hints, and online ensemble learning that dynamically maintains multiple draft models. Our algorithms come with theoretical justifications and improved acceleration rates, achieving up to a 24% speedup across seven benchmarks and three foundation models.
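The draft-verify-update loop described above can be illustrated with a toy sketch. The code below is not the paper's implementation: it stands in for the draft model with a single softmax distribution over a tiny vocabulary, uses a simplified accept/resample rule in place of full speculative sampling, and applies an optimistic-gradient-style update (extrapolating with the previous gradient as a predictive hint) to adapt the draft from verification feedback. All names and hyperparameters are illustrative assumptions.

```python
import math
import random

random.seed(0)

VOCAB = 5
# Fixed "target" next-token distribution (stand-in for the large model's verifier).
target = [0.05, 0.10, 0.60, 0.15, 0.10]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ce_grad(q, verified_token):
    # Gradient of cross-entropy loss w.r.t. draft logits: q - onehot(token).
    g = list(q)
    g[verified_token] -= 1.0
    return g

# Draft "model": a single softmax over learnable logits, adapted online.
logits = [0.0] * VOCAB
lr = 0.3
prev_grad = [0.0] * VOCAB
accepts = 0
STEPS = 2000

for t in range(STEPS):
    q = softmax(logits)
    # 1) Draft commits a candidate token.
    draft_token = random.choices(range(VOCAB), weights=q)[0]
    # 2) Verification feedback: accept with prob min(1, p/q); on rejection,
    #    fall back to a target sample (simplified vs. true residual sampling).
    if random.random() < min(1.0, target[draft_token] / q[draft_token]):
        verified = draft_token
        accepts += 1
    else:
        verified = random.choices(range(VOCAB), weights=target)[0]
    # 3) Draft adapts: optimistic step reusing the last gradient as a hint,
    #    i.e. descend along 2*g_t - g_{t-1} instead of g_t alone.
    g = ce_grad(q, verified)
    logits = [z - lr * (2.0 * gi - pgi) for z, gi, pgi in zip(logits, g, prev_grad)]
    prev_grad = g

print(f"acceptance rate: {accepts / STEPS:.2f}")
print("learned draft:", [round(p, 2) for p in softmax(logits)])
```

Because the verification feedback arrives for free at every decoding step, the draft distribution drifts toward the target's, and the acceptance rate (which equals the expected overlap between the two distributions) rises accordingly; this is the mechanism the abstract's regret-to-acceleration link formalizes.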