🤖 AI Summary
To address low token acceptance rates and limited acceleration in speculative decoding for large language models—caused by token misalignment between the training and inference phases—this paper proposes an alignment-aware speculative decoding framework. First, it introduces a token-alignable training mechanism that uses dynamic loss masking to exclude highly misaligned tokens from gradient computation, preventing them from interfering with parameter updates. Second, it designs a lightweight, input-aware draft model that explicitly aligns draft tokens with target tokens at the feature level. Third, it implements an end-to-end speculative decoding system built on LLaMA-series and Vicuna models. Evaluated across multiple benchmarks, the framework achieves a 7.2% average increase in accepted draft token length and an 8.4% improvement in speedup ratio, significantly outperforming existing state-of-the-art methods.
📝 Abstract
Speculative decoding accelerates inference in large language models (LLMs) by generating multiple draft tokens simultaneously. However, existing methods often suffer from token misalignment between the training and decoding phases, which limits their performance. To address this, we propose GRIFFIN, a novel framework that incorporates a token-alignable training strategy and a token-alignable draft model to mitigate misalignment. The training strategy employs a loss masking mechanism to exclude highly misaligned tokens during training, preventing them from negatively impacting the draft model's optimization. The token-alignable draft model introduces input tokens to correct inconsistencies in generated features. Experiments on LLaMA-series and Vicuna models demonstrate that GRIFFIN achieves an average acceptance length improvement of over 7% and a speedup-ratio improvement exceeding 8%, outperforming current state-of-the-art methods, as shown in Fig. 1 (a) and (b).
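The loss masking idea in the training strategy can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `masked_draft_loss`, the per-token `alignment_scores`, and the threshold `tau` are all hypothetical placeholders for however GRIFFIN measures draft/target misalignment.

```python
def masked_draft_loss(token_losses, alignment_scores, tau=0.5):
    """Sketch of token-alignable training via loss masking (hypothetical API).

    Average the per-token training loss only over tokens whose
    draft/target alignment score meets the threshold `tau`; highly
    misaligned tokens are masked out so they contribute nothing to
    the gradient.
    """
    kept = [loss for loss, score in zip(token_losses, alignment_scores)
            if score >= tau]
    if not kept:  # every token masked: no training signal this step
        return 0.0
    return sum(kept) / len(kept)


# Example: the middle token is poorly aligned (score 0.2 < tau),
# so only the first and third losses are averaged.
loss = masked_draft_loss([1.0, 2.0, 3.0], [0.9, 0.2, 0.8])
```

In a real training loop the same mask would be applied to a per-token cross-entropy tensor before reduction, so masked positions receive zero gradient.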