🤖 AI Summary
Large language models (LLMs) suffer from high inference latency, and existing speculative decoding methods rely on a binary draft-verify framework lacking rigorous theoretical foundations.
Method: This paper proposes the first polybasic speculative decoding framework with rigorous theoretical guarantees. It establishes a model of optimal inference time for multi-model collaborative inference, formally characterizing the quantitative trade-off among model capability, acceptance length, and computational cost. It further designs a dynamic acceptance policy and a computation-cost-aware collaborative generation mechanism, enabling heterogeneous model integration and architectural scalability.
Contribution/Results: The framework achieves 3.31×–4.43× inference speedup across multiple mainstream LLMs while exactly preserving the original output distribution. All code and complete theoretical proofs are publicly released.
📝 Abstract
Inference latency stands as a critical bottleneck in the large-scale deployment of Large Language Models (LLMs). Speculative decoding methods have recently shown promise in accelerating inference without compromising the output distribution. However, existing work typically relies on a dualistic draft-verify framework and lacks rigorous theoretical grounding. In this paper, we introduce a novel *polybasic* speculative decoding framework, underpinned by a comprehensive theoretical analysis. Specifically, we prove a fundamental theorem that characterizes the optimal inference time for multi-model speculative decoding systems, shedding light on how to extend beyond the dualistic approach to a more general polybasic paradigm. Through our theoretical investigation of multi-model token generation, we expose and optimize the interplay between model capabilities, acceptance lengths, and overall computational cost. Our framework supports both standalone implementation and integration with existing speculative techniques, leading to accelerated performance in practice. Experimental results across multiple model families demonstrate that our approach yields speedup ratios ranging from $3.31\times$ to $4.01\times$ for LLaMA2-Chat 7B, up to $3.87\times$ for LLaMA3-8B, up to $4.43\times$ for Vicuna-7B, and up to $3.85\times$ for Qwen2-7B -- all while preserving the original output distribution. We release our theoretical proofs and implementation code to facilitate further investigation into polybasic speculative decoding.
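For context, the dualistic draft-verify baseline that the paper generalizes works as follows: a small draft model proposes a token, and the target model accepts it with probability $\min(1, p/q)$, resampling from the normalized residual $\max(p-q, 0)$ on rejection. This rejection-sampling scheme is what guarantees the output distribution is preserved exactly. Below is a minimal illustrative sketch of that acceptance rule (this is the standard binary scheme, not the paper's polybasic algorithm; function and variable names are our own):

```python
import numpy as np

def accept_or_resample(p, q, token, rng):
    """Standard binary draft-verify step: `token` was sampled from the
    draft distribution q; accept it with prob min(1, p[token]/q[token]),
    otherwise resample from the residual max(p - q, 0), renormalized.
    The returned token is distributed exactly according to p."""
    if rng.random() < min(1.0, p[token] / q[token]):
        return token, True  # draft token accepted
    residual = np.maximum(p - q, 0.0)
    residual /= residual.sum()  # renormalize the rejection residual
    return rng.choice(len(p), p=residual), False

# Usage sketch: draft from q, then verify against the target p.
rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])  # target model's next-token distribution
q = np.array([0.2, 0.5, 0.3])  # draft model's next-token distribution
draft = rng.choice(3, p=q)
out, accepted = accept_or_resample(p, q, draft, rng)
```

A polybasic system chains several such verification stages across models of increasing capability, which is where the paper's trade-off between model capability, acceptance length, and compute cost arises.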