Multi-Drafter Speculative Decoding with Alignment Feedback

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited generalization of single draft models across diverse scenarios, which hinders efficient speculative decoding for large language models. To overcome this, the paper proposes MetaSD, a framework that introduces, for the first time, a collaborative mechanism leveraging multiple heterogeneous draft models within speculative decoding. It formulates draft-model selection as a multi-armed bandit problem and combines alignment feedback with online learning to allocate computational resources dynamically. This approach improves the adaptability and efficiency of speculative decoding, delivering consistent inference acceleration across tasks while preserving generation quality and significantly outperforming single-draft-model baselines.
📝 Abstract
Speculative decoding (SD) accelerates large language model (LLM) inference by using a smaller model to draft future tokens, which are then verified by the target LLM. This preserves generation quality by accepting only aligned tokens. However, individual drafters, often trained for specific tasks or domains, exhibit limited effectiveness across diverse applications. To address this, we introduce MetaSD, a unified framework that integrates multiple drafters into the SD process. MetaSD dynamically allocates computational resources to heterogeneous drafters by leveraging alignment feedback and framing drafter selection as a multi-armed bandit problem. Extensive experiments show MetaSD consistently outperforms single-drafter approaches.
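The abstract frames drafter selection as a multi-armed bandit problem driven by alignment feedback. A minimal sketch of that idea, assuming a UCB1 selection policy and using the target model's token acceptance rate as the per-round reward; the class name `DrafterBandit` and the reward design are illustrative assumptions, not MetaSD's exact algorithm:

```python
import math
import random


class DrafterBandit:
    """UCB1 bandit over heterogeneous draft models.

    The reward for each speculative round is the fraction of drafted
    tokens the target model accepts, a simple proxy for the paper's
    alignment feedback signal.
    """

    def __init__(self, num_drafters, c=1.0):
        self.counts = [0] * num_drafters          # rounds played per drafter
        self.values = [0.0] * num_drafters        # running mean acceptance rate
        self.c = c                                # exploration weight

    def select(self):
        # Play each drafter once before applying the UCB rule.
        for i, n in enumerate(self.counts):
            if n == 0:
                return i
        total = sum(self.counts)
        ucb = [
            v + self.c * math.sqrt(2.0 * math.log(total) / n)
            for v, n in zip(self.values, self.counts)
        ]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, arm, accepted, drafted):
        """Feed back one round's acceptance rate as the bandit reward."""
        reward = accepted / drafted if drafted else 0.0
        self.counts[arm] += 1
        # Incremental mean update.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


if __name__ == "__main__":
    # Simulate three drafters with different (hidden) acceptance rates.
    random.seed(0)
    true_rates = [0.3, 0.8, 0.5]
    bandit = DrafterBandit(len(true_rates))
    for _ in range(500):
        arm = bandit.select()
        accepted = sum(random.random() < true_rates[arm] for _ in range(8))
        bandit.update(arm, accepted, drafted=8)
    print("rounds per drafter:", bandit.counts)
```

With this setup the bandit should route most rounds to the drafter whose tokens the target model accepts most often, which is the resource-allocation behavior the summary describes.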
Problem

Research questions and friction points this paper is trying to address.

Speculative Decoding
Large Language Models
Multi-Drafter
Alignment Feedback
Inference Acceleration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Speculative Decoding
Multi-Drafter
Alignment Feedback
Multi-Armed Bandit
LLM Inference Acceleration