🤖 AI Summary
Traditional speculative decoding suffers from an overly stringent verification step that rejects many plausible tokens, limiting inference speedup. This work proposes a relaxed speculative decoding framework that dynamically fuses the output distributions of the draft and target models during verification. By introducing a task- and context-aware adaptive weighting mechanism, the method constructs a flexible ensemble verifier that relaxes the rigid token-matching constraint of conventional approaches. The technique substantially increases token acceptance rates and end-to-end acceleration while preserving generation quality, outperforming standard speculative decoding methods empirically.
📝 Abstract
Speculative decoding is an effective technique for accelerating large language model inference by drafting multiple tokens in parallel. In practice, its speedup is often bottlenecked by a rigid verification step that forces the accepted token distribution to exactly match the target model's. This constraint leads to the rejection of many plausible tokens, lowering the acceptance rate and limiting the overall speedup. To overcome this limitation, we propose Dynamic Verification Relaxed Speculative Decoding (DIVERSED), a relaxed verification framework that improves time efficiency while preserving generation quality. DIVERSED learns an ensemble-based verifier that blends the draft and target model distributions with a task-dependent and context-dependent weight. We provide theoretical justification for our approach and demonstrate empirically that DIVERSED achieves substantially higher inference efficiency than standard speculative decoding methods. Code is available at: https://github.com/comeusr/diversed.
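To make the relaxed verification idea concrete, here is a minimal sketch of the accept/reject step. Standard speculative decoding accepts a drafted token `x` with probability `min(1, p_target[x] / p_draft[x])`; the relaxed variant below instead verifies against a mixture of the two distributions. The function name, the single scalar weight `w`, and the residual-resampling details are illustrative assumptions for this sketch, not the paper's exact (learned, task- and context-dependent) formulation.

```python
import numpy as np

def relaxed_verify(p_draft, p_target, token, w, rng):
    """Accept or reject one drafted token against a blended verifier.

    Instead of checking the draft token against p_target alone, we
    verify it against the mixture m = w * p_draft + (1 - w) * p_target.
    For w > 0 this can only raise the acceptance probability, since
    m[token] / p_draft[token] >= p_target[token] / p_draft[token]
    whenever p_target[token] <= p_draft[token].  (Hypothetical sketch;
    the fixed scalar w stands in for the paper's adaptive weight.)
    """
    m = w * p_draft + (1.0 - w) * p_target
    accept_prob = min(1.0, m[token] / p_draft[token])
    if rng.random() < accept_prob:
        return token, True
    # On rejection, resample from the normalized residual distribution,
    # as in standard speculative sampling, so the output still follows m.
    residual = np.maximum(m - p_draft, 0.0)
    residual /= residual.sum()
    return int(rng.choice(len(m), p=residual)), False

# Example: the relaxed check is strictly looser than the strict one.
p_draft = np.array([0.6, 0.3, 0.1])
p_target = np.array([0.2, 0.5, 0.3])
m = 0.5 * p_draft + 0.5 * p_target
strict = min(1.0, p_target[0] / p_draft[0])   # standard acceptance prob
relaxed = min(1.0, m[0] / p_draft[0])         # blended acceptance prob
```

With `w = 0` this reduces exactly to standard speculative decoding, and with `w = 1` every drafted token is accepted; the interesting regime is in between, where the weight trades acceptance rate against fidelity to the target distribution.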