Judge Decoding: Faster Speculative Sampling Requires Going Beyond Model Alignment

📅 2025-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Speculative decoding in large language models (LLMs) suffers from rigid token-level alignment during draft verification, rejecting semantically valid but lexically divergent drafts. Method: This paper proposes Judge Decoding—the first integration of the LLM-as-a-judge paradigm into speculative decoding's verification stage—employing a lightweight judgement head trained on top of Llama-3.1 embeddings to assess draft continuations based on semantic plausibility rather than literal token matching. Contribution/Results: The approach overcomes the traditional alignment bottleneck, enabling acceptance of non-aligned yet semantically correct drafts. It achieves a 9× inference speedup over Llama-405B and delivers 129 tokens/s on 8×H100 with the 8B/405B-Judge variant, while preserving generation quality at baseline levels and maintaining full compatibility with standard inference frameworks.

📝 Abstract
The performance of large language models (LLMs) is closely linked to their underlying size, leading to ever-growing networks and hence slower inference. Speculative decoding has been proposed as a technique to accelerate autoregressive generation, leveraging a fast draft model to propose candidate tokens, which are then verified in parallel based on their likelihood under the target model. While this approach is guaranteed to reproduce the target output, it incurs a substantial penalty: many high-quality draft tokens are rejected, even when they represent objectively valid continuations. Indeed, we show that even powerful draft models such as GPT-4o, as well as human-written text, cannot achieve high acceptance rates under the standard verification scheme. This severely limits the speedup potential of current speculative decoding methods, as an early rejection becomes overwhelmingly likely when solely relying on alignment of draft and target. We thus ask the following question: Can we adapt verification to recognize correct, but non-aligned replies? To this end, we draw inspiration from the LLM-as-a-judge framework, which demonstrated that LLMs are able to rate answers in a versatile way. We carefully design a dataset to elicit the same capability in the target model by training a compact module on top of the embeddings to produce "judgements" of the current continuation. We showcase our strategy on the Llama-3.1 family, where our 8B/405B-Judge achieves a speedup of 9x over Llama-405B, while maintaining its quality on a large range of benchmarks. These benefits remain present even in optimized inference frameworks, where our method reaches up to 141 tokens/s for the 8B/70B-Judge and 129 tokens/s for the 8B/405B-Judge on 2 and 8 H100s respectively.
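To make the abstract's contrast concrete, here is a minimal sketch of the two verification styles. The standard speculative-decoding rule accepts a draft token with probability min(1, p_target/p_draft), so a valid continuation that the target merely ranks lower is often rejected; a judge-style check instead thresholds a plausibility score. The toy distributions, the `judge_verify` helper, and its threshold are illustrative assumptions, not the paper's actual implementation.

```python
import random

random.seed(0)

def standard_verify(p_target, p_draft, token):
    # Standard speculative decoding: accept the draft token with
    # probability min(1, p_target(token) / p_draft(token)).
    ratio = p_target[token] / p_draft[token]
    return random.random() < min(1.0, ratio)

def judge_verify(judge_score, threshold=0.5):
    # Judge-style verification (illustrative sketch): a small head scores
    # the draft continuation's plausibility; accept if above a threshold.
    return judge_score >= threshold

# Toy example: the draft proposes token 1, a valid continuation that the
# target model ranks lower than its own favorite, token 0.
p_draft = {0: 0.2, 1: 0.7, 2: 0.1}
p_target = {0: 0.6, 1: 0.3, 2: 0.1}

trials = 10_000
accepted = sum(standard_verify(p_target, p_draft, 1) for _ in range(trials))
# Empirical acceptance rate is close to min(1, 0.3/0.7) ≈ 0.43.
print(f"standard acceptance rate: {accepted / trials:.2f}")

# A judge head rating the continuation as semantically sound accepts it outright.
print(judge_verify(judge_score=0.9))  # prints: True
```

This is the bottleneck the abstract describes: alignment-based acceptance is capped by the likelihood ratio even for objectively valid drafts, whereas a semantic judgement can accept them deterministically.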
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Processing Speed
Accuracy in Contextual Understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Model Optimization
Intelligent Guess Evaluation
Speed-Quality Tradeoff Improvement