🤖 AI Summary
Existing speculative decoding (SD) methods employ fixed draft structures, which limits decoding efficiency and hinders adaptability across diverse inference scenarios. To address this, we propose a dynamically adaptive speculative decoding framework with two key innovations: (1) the Lightweight Draft Length Predictor (LDLP), the first module of its kind, which enables context-aware, adaptive draft length selection without manual threshold tuning; and (2) explicit modeling of variable-length draft structures to support fine-grained, application-specific optimization. Extensive experiments demonstrate that our approach achieves a 1.62× speedup over standard autoregressive decoding and outperforms the state-of-the-art fixed-length baseline, while rigorously preserving output quality, i.e., generating identical token sequences under equivalent conditions.
📝 Abstract
Speculative Decoding (SD) is a popular lossless technique for accelerating the inference of Large Language Models (LLMs). We show that the decoding speed of SD frameworks with static draft structures can be significantly improved by incorporating context-aware adaptive draft structures. However, current studies on adaptive draft structures are limited in performance, modeling approach, and applicability. In this paper, we introduce AdaEAGLE, the first SD framework that explicitly models adaptive draft structures. AdaEAGLE leverages the Lightweight Draft Length Predictor (LDLP) module to explicitly predict the optimal number of draft tokens during inference and guide the draft model. It achieves comparable speedups without manual thresholds and allows for deeper, more specialized optimizations. Moreover, combined with threshold-based strategies, AdaEAGLE achieves a $1.62\times$ speedup over vanilla autoregressive (AR) decoding and outperforms the fixed-length SotA baseline while maintaining output quality.
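To make the idea concrete, here is a minimal toy sketch of speculative decoding with an adaptive draft length. All names (`predict_draft_len`, `speculative_decode`) and the acceptance-ratio heuristic are illustrative stand-ins, not the paper's actual LDLP, which is a learned predictor conditioned on context:

```python
def predict_draft_len(accept_ratio, max_len=8):
    # Toy stand-in for a draft length predictor: draft more tokens
    # when recent drafts were mostly accepted, fewer otherwise.
    return max(1, round(accept_ratio * max_len))

def speculative_decode(prefix, draft_model, target_model, steps=5):
    """Greedy speculative decoding loop with a variable draft length.

    draft_model / target_model: callables mapping a token list to the
    next token (toy stand-ins for the small and large LLMs).
    """
    tokens = list(prefix)
    accept_ratio = 0.5  # running estimate of how much of each draft survives
    for _ in range(steps):
        k = predict_draft_len(accept_ratio)  # adaptive, per-step draft length
        # 1) Draft k tokens autoregressively with the cheap model.
        draft = []
        for _ in range(k):
            draft.append(draft_model(tokens + draft))
        # 2) Verify: accept the longest prefix the target model agrees with.
        accepted = 0
        for t in draft:
            if target_model(tokens) == t:
                tokens.append(t)
                accepted += 1
            else:
                break
        # 3) The target model always contributes one token (correction or bonus),
        # so output is identical to plain AR decoding of the target model.
        tokens.append(target_model(tokens))
        accept_ratio = 0.9 * accept_ratio + 0.1 * (accepted / k)
    return tokens
```

When the draft model agrees with the target model, each verification step accepts the whole draft plus one bonus token, which is where the wall-clock speedup comes from; the adaptive `k` tries to avoid drafting tokens that would be rejected anyway.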