🤖 AI Summary
Speculative Decoding (SD) suffers from high token rejection rates, leading to frequent and inefficient target large language model (LLM) invocations. To address this, we propose Consultant Decoding (CD), a novel inference acceleration framework that verifies draft sequences using only token-level likelihood scores from the target LLM, eliminating reliance on traditional importance sampling and enabling lightweight, high-confidence single-shot validation. CD establishes a heterogeneous model collaboration paradigm, enabling efficient cooperation between models differing in parameter count by up to two orders of magnitude, while preserving output quality equivalent to the target model (100% fidelity on complex tasks). Experiments demonstrate that CD achieves a 2.5× speedup in inference latency and reduces the target LLM invocation rate to under 10%, significantly outperforming state-of-the-art SD methods.
📝 Abstract
The synergistic mechanism based on Speculative Decoding (SD) has garnered considerable attention as a simple yet effective approach for accelerating the inference of large language models (LLMs). Nonetheless, high rejection rates require repeated LLM calls to validate draft tokens, undermining the overall efficiency gain of SD. In this work, we revisit existing verification mechanisms and propose a novel synergistic mechanism, Consultant Decoding (CD). Unlike SD, which relies on a metric derived from importance sampling for verification, CD verifies candidate drafts using token-level likelihoods computed solely by the LLM. CD achieves up to a 2.5-fold increase in inference speed compared to the target model, while maintaining comparable generation quality (around 100% of the target model's performance). Interestingly, this is achieved by combining models whose parameter sizes differ by two orders of magnitude. In addition, CD reduces the call frequency of the large target model to below 10%, particularly in more demanding tasks. CD's performance was even found to surpass that of the large target model, which theoretically represents the upper bound for speculative decoding.
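To make the contrast concrete, here is a minimal sketch of the two acceptance rules. The SD rule is the standard importance-sampling test from the speculative decoding literature; the CD rule shown is an illustrative reading of the abstract (accept when the target model's token likelihood clears a threshold), and the function names and the `threshold` value are assumptions, not the paper's exact formulation.

```python
import random

def sd_accept(p_target: float, p_draft: float) -> bool:
    """Standard SD verification: accept a draft token with probability
    min(1, p_target / p_draft), i.e. an importance-sampling rejection test
    that requires a random draw and the draft model's probability."""
    return random.random() < min(1.0, p_target / p_draft)

def cd_accept(p_target: float, threshold: float = 0.1) -> bool:
    """CD-style verification (sketch): accept whenever the target model's
    token-level likelihood clears a fixed threshold. Deterministic, and it
    needs neither the draft probability nor a random draw; `threshold` is
    a hypothetical tuning knob, not a value from the paper."""
    return p_target >= threshold
```

Because the CD rule depends only on the target model's own likelihood, a plausible draft token is accepted in a single deterministic check, which is consistent with the reported drop in target-model invocations.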