🤖 AI Summary
Existing speculative-decoding methods for vision-language models typically employ static tree structures, which struggle to adapt to varying prediction difficulty across generation steps, limiting both accepted sequence length and acceleration gains. To address this limitation, this work proposes SAGE, a novel framework that dynamically adjusts the speculative tree structure using output entropy as a confidence metric: deep, narrow trees are constructed under high-confidence conditions to maximize speculation depth, while shallow, wide trees are used under low-confidence conditions to enhance diversity. By integrating adaptive tree construction with a parallel multi-token verification mechanism, SAGE substantially improves inference efficiency. Experiments demonstrate speedups of up to 3.36× and 3.18× on LLaVA-OneVision-72B and Qwen2.5-VL-72B, respectively, without compromising output quality.
📝 Abstract
Speculative decoding has emerged as a promising approach to accelerate inference in vision-language models (VLMs) by enabling parallel verification of multiple draft tokens. However, existing methods rely on static tree structures that remain fixed throughout the decoding process, failing to adapt to the varying prediction difficulty across generation steps. This leads to suboptimal acceptance lengths and limited speedup. In this paper, we propose SAGE, a novel framework that dynamically adjusts the speculation tree structure based on real-time prediction uncertainty. Our key insight is that output entropy serves as a natural confidence indicator with strong temporal correlation across decoding steps. SAGE constructs deeper, narrower trees for high-confidence predictions to maximize speculation depth, and shallower, wider trees for uncertain predictions to diversify exploration. SAGE thereby improves acceptance lengths and achieves greater speedups than static-tree baselines. Experiments on multiple benchmarks demonstrate the effectiveness of SAGE: without any loss in output quality, it delivers up to $3.36\times$ decoding speedup for LLaVA-OneVision-72B and $3.18\times$ for Qwen2.5-VL-72B.
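The core idea of entropy-driven tree shaping can be sketched in a few lines: compute the entropy of the drafter's next-token distribution, then pick a deep-narrow tree when entropy is low (confident) and a shallow-wide tree when it is high (uncertain). The entropy threshold and the specific (depth, branching) shapes below are illustrative placeholders, not the paper's tuned values.

```python
import math

def token_entropy(probs):
    """Shannon entropy (nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def choose_tree_shape(probs, entropy_threshold=1.0):
    """Pick a speculation-tree shape from prediction confidence, in the
    spirit of SAGE: low entropy -> deep, narrow tree (maximize depth);
    high entropy -> shallow, wide tree (diversify exploration).
    Threshold and shapes are hypothetical, for illustration only."""
    if token_entropy(probs) < entropy_threshold:
        return {"depth": 6, "branching": 2}  # confident: speculate deep
    return {"depth": 3, "branching": 5}      # uncertain: cover more candidates

# A peaked (confident) vs. a flat (uncertain) distribution:
confident = [0.9, 0.05, 0.03, 0.02]   # entropy ~0.43 nats
uncertain = [0.25, 0.25, 0.25, 0.25]  # entropy = ln(4) ~1.39 nats
```

In a real decoder these probabilities would come from a softmax over the draft model's logits at each step, and the chosen shape would determine how many draft tokens the target model verifies in parallel.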