🤖 AI Summary
In speculative decoding, the verification step constitutes a critical inference bottleneck; existing intermediate verification methods suffer from high training overhead, substantial memory consumption, and accuracy degradation due to heuristic approximations.
Method: This paper proposes HiSpec, the first framework to integrate early-exit models into the intermediate verification stage of hierarchical speculative decoding, enabling low-overhead, high-accuracy dynamic early termination. HiSpec further incorporates KV cache reuse, shared hidden-state computation, and periodic target-model verification to minimize verification latency without compromising generation quality.
Contribution/Results: Experiments across multiple benchmarks and LLMs demonstrate that HiSpec achieves an average 1.28× throughput improvement (up to 2.01×), with zero accuracy loss relative to standard speculative decoding.
📝 Abstract
Speculative decoding accelerates LLM inference by using a smaller draft model to speculate tokens that a larger target model verifies. Verification is often the bottleneck (e.g., verification is $4\times$ slower than token generation when a 3B model speculates for a 70B target model), but most prior works focus only on accelerating drafting. \textit{``Intermediate''} verification reduces verification time by discarding inaccurate draft tokens early, but existing methods incur substantial training overheads in incorporating the intermediate verifier, increase the memory footprint to orchestrate the intermediate verification step, and compromise accuracy by relying on approximate heuristics.
We propose \underline{\textit{Hi}}\textit{erarchical} \underline{\textit{Spec}}\textit{ulative Decoding (HiSpec)}, a framework for high-throughput speculative decoding that exploits \textit{early-exit (EE) models} for low-overhead intermediate verification. EE models allow tokens to exit early by skipping layer traversal and are explicitly trained so that hidden states at selected layers can be interpreted, making them uniquely suited for intermediate verification without drastically increasing compute and memory overheads. To improve resource efficiency even further, we design a methodology that enables HiSpec to re-use key-value caches and hidden states between the draft, intermediate verifier, and target models. To maintain accuracy, HiSpec periodically validates the draft tokens accepted by the intermediate verifier against the target model. Our evaluations using various representative benchmarks and models show that HiSpec improves throughput by 1.28$\times$ on average and by up to 2.01$\times$ compared to the baseline single-layer speculation without compromising accuracy.
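To make the control flow concrete, the hierarchy described above can be sketched as a toy loop: a cheap draft model proposes a batch of tokens, an early-exit intermediate verifier discards bad drafts at the first rejection, and the full target model is consulted only periodically. This is a hypothetical illustration, not the authors' implementation; the toy `draft_propose`, `intermediate_verify`, and `target_verify` functions below stand in for real models, with simple modular-arithmetic checks replacing actual acceptance tests on logits and hidden states.

```python
# Hypothetical sketch of hierarchical speculative decoding.
# Tokens are plain integers; acceptance checks are toy stand-ins for
# comparing model distributions / early-exit hidden states.

def draft_propose(prefix, k):
    # Toy draft model: propose k candidate tokens after the prefix.
    return [prefix[-1] + i + 1 for i in range(k)]

def intermediate_verify(prefix, proposed):
    # Toy early-exit verifier: cheap check at an intermediate layer;
    # the first rejected token discards the rest of the draft.
    accepted = []
    for tok in proposed:
        if tok % 7 != 0:  # stand-in for "exit-layer hidden state agrees"
            accepted.append(tok)
        else:
            break
    return accepted

def target_verify(prefix, accepted):
    # Toy target model: stricter full-depth check; on rejection it
    # emits a corrected token and stops, as in speculative decoding.
    verified = []
    for tok in accepted:
        if tok % 5 != 0:
            verified.append(tok)
        else:
            verified.append(tok + 1)  # target "corrects" the token
            break
    return verified

def hierarchical_decode(seed, steps=4, k=4, target_period=2):
    out = [seed]
    for step in range(steps):
        proposed = draft_propose(out, k)
        accepted = intermediate_verify(out, proposed)
        if not accepted:
            # Intermediate verifier rejected everything: fall back to the
            # target on the first token so decoding always makes progress.
            accepted = target_verify(out, proposed[:1])
        elif (step + 1) % target_period == 0:
            # Periodic full-model validation preserves output quality.
            accepted = target_verify(out, accepted)
        out.extend(accepted)
    return out
```

The key design point the sketch illustrates is that the expensive `target_verify` runs only every `target_period` iterations (or on total rejection), while the cheap intermediate check filters every draft batch, which is where the verification-latency savings come from.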