AI Summary
To address the high inference latency inherent in autoregressive generation by large language models (LLMs), this paper proposes FLy (Training-Free Loosely Speculative Decoding), a training-free, relaxed speculative decoding method. Unlike conventional speculative decoding (SPD), which requires exact token-level agreement between draft and target models, FLy leverages the target model's intrinsic self-correction capability to assess candidate continuations at the semantic level. It introduces an entropy-based gate and a token-level deferred window for robust verification. This two-tier semantic validation scheme, combined with a multi-level acceleration strategy, makes the method plug-and-play across diverse models and domains. Experiments demonstrate that FLy achieves 2.81× and 5.07× speedups on Llama-3.1-70B-Instruct and Llama-3.1-405B-Instruct, respectively, while preserving over 99% of the target model's accuracy. On out-of-distribution (OOD) tasks, it outperforms EAGLE-3 by 1.62× in acceleration.
Abstract
Large language models (LLMs) achieve strong performance across diverse tasks but suffer from high inference latency due to their autoregressive generation. Speculative Decoding (SPD) mitigates this issue by drafting candidate tokens with a smaller model and verifying them in parallel with the target model, yet its strict exact-match verification discards many semantically valid continuations. Moreover, existing training-based SPD methods often degrade on out-of-distribution (OOD) tasks. To address these issues, we propose Training-Free Loosely Speculative Decoding (FLy), a novel method that loosens the rigid verification criterion by leveraging the target model's self-corrective behavior to judge whether a draft-target mismatch remains semantically valid. FLy introduces a two-tier mechanism: an entropy-level gate that identifies whether the current token allows multiple plausible alternatives or is nearly deterministic, and a token-level deferred window that distinguishes genuine errors from differently worded yet semantically correct variants. To further reduce latency, we design a multi-level acceleration strategy that accelerates not only the target model but also the drafter itself. Owing to its training-free design, FLy composes seamlessly with arbitrary draft-target pairs and generalizes across models and domains without hyperparameter re-tuning. Experiments show that FLy preserves more than 99% of the target model's accuracy while achieving an average 2.81× speedup on Llama-3.1-70B-Instruct and 5.07× speedup on the 405B variant. Notably, on out-of-domain datasets, our method remains highly effective, outperforming the training-based EAGLE-3 by 1.62×.
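The abstract describes the two-tier verification only at a high level, so the sketch below is an illustration rather than the authors' implementation. It shows one plausible shape for an entropy-gated, relaxed verification step: when the target distribution is near-deterministic (low entropy), the usual exact-match check applies; when it is high-entropy, a draft token that the target itself ranks highly is accepted, and anything else is deferred rather than rejected. The `entropy_threshold`, `top_k` cutoff, and the explicit `defer` outcome are assumptions made for this example.

```python
import numpy as np


def entropy(p: np.ndarray) -> float:
    """Shannon entropy (in nats) of a probability vector."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())


def relaxed_verify(draft_token: int,
                   target_probs: np.ndarray,
                   entropy_threshold: float = 2.0,
                   top_k: int = 5) -> str:
    """Return 'accept', 'reject', or 'defer' for one drafted token.

    Low-entropy (near-deterministic) positions fall back to strict
    exact-match verification; high-entropy positions accept any draft
    token the target model itself ranks in its top-k and otherwise
    defer the decision to a later token-level window instead of
    rejecting outright.
    """
    h = entropy(target_probs)
    target_argmax = int(np.argmax(target_probs))

    if h < entropy_threshold:
        # Nearly deterministic position: require exact agreement.
        return "accept" if draft_token == target_argmax else "reject"

    # Ambiguous position: several continuations are plausible.
    top_candidates = np.argsort(target_probs)[-top_k:]
    if draft_token in top_candidates:
        return "accept"
    return "defer"  # let later target tokens decide (deferred window)


if __name__ == "__main__":
    # Toy 10-token vocabulary.
    peaked = np.zeros(10)
    peaked[3], peaked[4] = 0.95, 0.05        # near-deterministic target
    ambiguous = np.full(10, 0.099)
    ambiguous[7] = 0.109                     # high-entropy target

    print(relaxed_verify(draft_token=4, target_probs=peaked))     # -> reject
    print(relaxed_verify(draft_token=7, target_probs=ambiguous))  # -> accept
```

Deferring rather than rejecting at ambiguous positions is what allows subsequent target tokens to reveal whether a mismatch was a genuine error or merely a differently worded but semantically correct variant, which is the behavior the abstract attributes to the token-level deferred window.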