🤖 AI Summary
This work addresses the inefficiency of existing speculative decoding methods in low-confidence scenarios, where strict token-by-token verification frequently triggers rollbacks. The authors propose a training-free, domain-agnostic, margin-aware verification strategy that ties verification strictness to the decision stability reflected in the target model's local logits, relaxing the rejection criterion only when strict verification yields marginal gains. Compatible with mainstream target-coupled speculative decoding frameworks, the method achieves consistent and significant inference speedups across models ranging from 8B to 235B parameters while preserving generation quality on multiple benchmark tasks.
📝 Abstract
Speculative Decoding (SD) accelerates autoregressive large language model (LLM) inference by decoupling generation and verification. While recent methods improve draft quality by tightly coupling the drafter with the target model, the verification mechanism itself remains largely unchanged, relying on strict token-level rejection sampling. In practice, modern LLMs frequently operate in low-margin regimes where the target model exhibits weak preference among top candidates. In such cases, rejecting plausible runner-up tokens yields negligible information gain while incurring substantial rollback cost, leading to a fundamental inefficiency in verification. We propose Margin-Aware Speculative Verification, a training-free and domain-agnostic verification strategy that adapts to the target model's local decisiveness. Our method conditions verification on decision stability measured directly from the target logits and relaxes rejection only when strict verification provides minimal benefit. Importantly, the approach modifies only the verification rule and is fully compatible with existing target-coupled speculative decoding frameworks. Extensive experiments across model scales ranging from 8B to 235B demonstrate that our method delivers consistent and significant inference speedups over state-of-the-art baselines while preserving generation quality across diverse benchmarks.
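To make the idea concrete, here is a minimal sketch of a margin-aware acceptance rule. This is an illustrative reconstruction, not the paper's exact algorithm: the margin threshold `tau`, the use of the top-2 logit gap as the decisiveness measure, and the "accept any top-2 candidate" relaxation are all assumptions for the example. When the target's top-2 logit margin is small (a low-margin regime), the rule accepts a plausible runner-up draft token outright; otherwise it falls back to standard speculative-decoding rejection sampling.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()           # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def margin_aware_verify(target_logits, draft_token, draft_prob, tau=0.5, rng=None):
    """Hypothetical margin-aware acceptance rule (illustrative sketch only).

    target_logits: target model's logits at this position.
    draft_token:   token proposed by the drafter.
    draft_prob:    drafter's probability for that token.
    tau:           assumed margin threshold below which verification is relaxed.
    """
    # Fixed seed here only to keep the sketch deterministic.
    rng = rng or np.random.default_rng(0)
    p = softmax(target_logits)
    top2 = np.argsort(target_logits)[-2:]           # [runner-up, best] token ids
    margin = target_logits[top2[1]] - target_logits[top2[0]]
    if margin < tau and draft_token in top2:
        # Low decisiveness: rejecting a plausible runner-up yields little
        # information gain but forces a rollback, so accept it directly.
        return True
    # High decisiveness: standard token-level rejection sampling.
    accept_prob = min(1.0, p[draft_token] / draft_prob)
    return bool(rng.random() < accept_prob)
```

For example, with target logits `[2.0, 1.9, -3.0]` the top-2 margin is 0.1, so a drafted runner-up token is accepted without sampling; with logits `[5.0, 0.0, -3.0]` the margin is large and the usual rejection-sampling criterion applies unchanged.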