Not-a-Bandit: Provably No-Regret Drafter Selection in Speculative Decoding for LLMs

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the problem of selecting the best draft model online in speculative decoding. The authors propose a provably no-regret online learning algorithm that accurately evaluates all candidate draft models, under metrics such as token acceptance rate or expected accepted length, without additional queries to the target model, thereby overcoming the information bottleneck of conventional multi-armed bandit approaches. The algorithm supports diverse decoding structures, including single-draft, multi-draft, and draft-tree configurations, while balancing computational overhead against latency reduction. Theoretically, it guarantees cumulative regret that grows sublinearly in the number of target-model queries, and it improves exponentially over bandit-based approaches as the number of candidate models increases. Extensive experiments across multiple open-source large language models and benchmark datasets demonstrate significant improvements over state-of-the-art baselines, including EAGLE3 and BanditSpec, particularly on tasks requiring long reasoning chains.

📝 Abstract
Speculative decoding is widely used to accelerate large language model (LLM) inference. In this work, we focus on the online draft model selection problem in speculative decoding. We design an algorithm that provably competes with the best draft model in hindsight for each query, in terms of either the token acceptance probability or the expected acceptance length. In particular, we show that we can accurately evaluate all draft models, instead of only the chosen one, without incurring additional queries to the target model, which allows us to improve exponentially over the existing bandit-based approach as the number of draft models increases. Our approach applies generically to any speculative decoding method (single-draft, multi-draft, and draft-tree). Moreover, we design system-efficient versions of the online learners and demonstrate that the overhead in computation and latency can be substantially reduced. We conduct extensive experiments on open-source LLMs and diverse datasets, demonstrating that our methods substantially outperform the state-of-the-art EAGLE3 and the BanditSpec baseline across a variety of domains where specialized domain-expert drafters are available, especially when long reasoning chains are required.
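The abstract's central idea, evaluating every candidate drafter from information the target model already produces, is what turns a bandit problem into a full-information one. A minimal sketch of that setup (illustrative only, not the authors' implementation; the toy vocabulary, learning rate, and class names are assumptions): under standard speculative sampling, a drafter with proposal distribution q has one-token acceptance probability Σ_x min(p(x), q(x)) against target distribution p, and since p is computed anyway during verification, this score is available for all drafters at once. Those scores can then drive a full-information exponential-weights (Hedge) learner, which is the classical route to sublinear regret.

```python
import numpy as np

def acceptance_rate(p_target, q_draft):
    # One-token acceptance probability of a drafter with proposal
    # distribution q_draft under standard speculative sampling:
    # sum_x min(p(x), q(x)). Computable for EVERY drafter from the
    # single target distribution p_target already produced at
    # verification time, with no extra target queries.
    return float(np.minimum(p_target, q_draft).sum())

class HedgeSelector:
    """Full-information exponential-weights (Hedge) learner over k drafters.

    Illustrative sketch: because rewards for all arms are observed each
    round, Hedge applies directly (no bandit exploration needed)."""

    def __init__(self, k, eta=0.5):
        self.log_w = np.zeros(k)  # log-weights for numerical stability
        self.eta = eta            # learning rate (assumed value)

    def pick(self):
        # Greedy choice; sampling proportionally to the weights is the
        # other standard option.
        return int(np.argmax(self.log_w))

    def update(self, rewards):
        # rewards: acceptance rates of ALL drafters this round.
        self.log_w += self.eta * np.asarray(rewards)
```

On a toy 4-token vocabulary, a drafter whose proposal matches the target exactly scores 1.0, a uniform drafter scores 0.8 against p = (0.4, 0.3, 0.2, 0.1), and the learner's weights concentrate on the best drafter after a few rounds.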
Problem

Research questions and friction points this paper is trying to address.

Optimizing draft model selection for speculative decoding in LLMs
Provably competing with best draft model for token acceptance
Reducing computation overhead while outperforming existing approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online algorithm selects best draft model per query
Evaluates all draft models without extra target queries
System-efficient learners reduce computation and latency overhead
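The selection metric can also be the expected accepted length rather than the raw acceptance rate. Under the standard simplifying assumption that each of γ drafted tokens is accepted i.i.d. with probability a, the classical speculative-decoding analysis gives an expected (1 − a^(γ+1)) / (1 − a) tokens per verification pass, counting the bonus token the target emits. A small helper (illustrative; the function name is an assumption, not from the paper):

```python
def expected_accepted_length(a: float, gamma: int) -> float:
    """Expected tokens generated per target verification pass when each
    of gamma drafted tokens is accepted i.i.d. with probability a,
    including the corrected/bonus token emitted by the target model."""
    if a >= 1.0:
        # Every drafted token accepted, plus the bonus token.
        return float(gamma + 1)
    # Geometric series 1 + a + a^2 + ... + a^gamma.
    return (1.0 - a ** (gamma + 1)) / (1.0 - a)
```

A higher acceptance rate or a longer draft both raise this quantity, which is why ranking drafters by expected accepted length, not just per-token acceptance, can change which drafter is best for a given query.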