Training-Free Loosely Speculative Decoding: Accepting Semantically Correct Drafts Beyond Exact Match

📅 2025-11-28
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the high inference latency inherent in autoregressive generation by large language models (LLMs), this paper proposes Training-Free Loosely Speculative Decoding (FLy), a relaxed speculative decoding method that requires no training. Unlike conventional speculative decoding (SPD), which requires exact token-level agreement between the draft and target models, FLy leverages the target model's intrinsic self-correction capability to assess candidate continuations at the semantic level. It introduces a two-tier verification scheme, combining an entropy-based gate with a dynamic deferred window, along with multi-level acceleration strategies that speed up both the target model and the drafter, enabling plug-and-play compatibility across diverse models and domains. Experiments demonstrate that FLy achieves 2.81× and 5.07× speedups on Llama-3.1-70B-Instruct and Llama-3.1-405B-Instruct, respectively, while preserving over 99% of the target model's accuracy. On out-of-distribution (OOD) tasks, it outperforms EAGLE-3 by 1.62× in acceleration.

๐Ÿ“ Abstract
Large language models (LLMs) achieve strong performance across diverse tasks but suffer from high inference latency due to their autoregressive generation. Speculative Decoding (SPD) mitigates this issue by verifying candidate tokens in parallel from a smaller draft model, yet its strict exact-match verification discards many semantically valid continuations. Moreover, existing training-based SPD methods often suffer from performance degradation on out-of-distribution (OOD) tasks. To this end, we propose Training-Free Loosely Speculative Decoding (FLy), a novel method that loosens the rigid verification criterion by leveraging the target model's self-corrective behavior to judge whether a draft-target mismatch remains semantically valid. FLy introduces a two-tier mechanism: an entropy-level gate that identifies whether the current token allows multiple plausible alternatives or is nearly deterministic, and a token-level deferred window that distinguishes genuine errors from differently worded yet semantically correct variants. To further reduce latency, we design a multi-level acceleration strategy that accelerates not only the target model but also the drafter itself. Owing to its training-free design, FLy composes seamlessly with arbitrary draft-target pairs and generalizes across models and domains without hyperparameter re-tuning. Experiments show that FLy preserves more than 99% of the target model's accuracy while achieving an average 2.81x speedup on Llama-3.1-70B-Instruct and 5.07x speedup on the 405B variant. Notably, on out-of-domain datasets, our method remains highly effective and outperforms the training-based method EAGLE-3 by 1.62x.
Problem

Research questions and friction points this paper is trying to address.

Reduces LLM inference latency by accepting semantically correct draft tokens
Overcomes strict exact-match verification that discards valid continuations
Maintains accuracy while accelerating models without training requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Loosens verification using target model's self-correction
Employs entropy gate and deferred window for semantic validation
Multi-level acceleration strategy for both target and draft models
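The entropy gate described above can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation: the function names, the threshold value, and the three-way accept/defer/reject interface are all hypothetical, and a real system would operate on logits from actual draft and target models.

```python
import math

def token_entropy(probs):
    """Shannon entropy (nats) of the target model's next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def verify_draft(draft_token, target_probs, entropy_threshold=1.0):
    """Relaxed verification sketch (hypothetical API, not the paper's code).

    Exact matches are accepted as in standard speculative decoding. On a
    mismatch, a high-entropy target distribution signals that several
    continuations are plausible, so the draft token is deferred to a
    token-level window for semantic checking instead of being rejected
    outright; a near-deterministic target that disagrees rejects it.
    """
    target_token = max(range(len(target_probs)), key=lambda i: target_probs[i])
    if draft_token == target_token:
        return "accept"  # standard exact-match path
    if token_entropy(target_probs) > entropy_threshold:
        return "defer"   # multiple plausible tokens: send to deferred window
    return "reject"      # target is confident and disagrees
```

The key contrast with strict SPD is the middle branch: a mismatch under a flat (high-entropy) target distribution is no longer an automatic rejection, which is what allows differently worded yet semantically valid drafts to survive verification.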
Jinze Li
Advanced Micro Devices, Inc., Beijing, China
Yixing Xu
AMD
machine learning, deep learning
Guanchen Li
Advanced Micro Devices, Inc., Beijing, China
Shuo Yang
Advanced Micro Devices, Inc., Beijing, China
Jinfeng Xu
Advanced Micro Devices, Inc., Beijing, China
Xuanwu Yin
Advanced Micro Devices, Inc., Beijing, China
Dong Li
Advanced Micro Devices, Inc., Beijing, China
Edith C. H. Ngai
Associate Professor, Dept. of Electrical and Electronic Engineering, The University of Hong Kong
edge intelligence, Internet-of-Things, smart cities, smart health, security and privacy
Emad Barsoum
AMD, Columbia University
Generative AI, Foundation Models, Agentic AI, Computer Vision, ML Frameworks