🤖 AI Summary
Existing speculative sampling methods face a fundamental trade-off in accelerating autoregressive decoding: strictly matching the target distribution limits throughput gains, while increasing acceptance rates often distorts the output distribution and degrades generation quality. This work formalizes speculative sampling as a constrained optimization problem and introduces Cactus, a method that maximizes token acceptance rate under a controllable bound on distributional deviation. By integrating an entropy-based heuristic, Cactus provides a theoretical guarantee on the trade-off between decoding speed and generation fidelity. Experimental results demonstrate that Cactus achieves substantial throughput improvements across multiple benchmarks while effectively preserving output distribution accuracy and maintaining high-quality generation.
📝 Abstract
Speculative sampling (SpS) has been successful in accelerating the decoding throughput of auto-regressive large language models by leveraging smaller draft models. SpS strictly enforces the generated distribution to match that of the verifier LLM. This is unnecessarily restrictive, as slight variations of the verifier's distribution, such as sampling with top-$k$ or temperature, would also be acceptable. Typical acceptance sampling (TAS) alleviates this issue by accepting more tokens using entropy-based heuristics. However, this approach distorts the verifier distribution, potentially degrading output quality when the verifier encodes critical information. In this work, we formalize speculative sampling through the lens of constrained optimization. Based on this formulation, we propose Cactus (constrained acceptance speculative sampling), a method that guarantees controlled divergence from the verifier distribution while increasing acceptance rates. Empirical results across a wide range of benchmarks confirm the effectiveness of our approach.
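For context, the standard SpS verification step that the abstract builds on can be sketched as follows. This is a minimal illustration of the classic accept/reject rule (accept a draft token $x$ with probability $\min(1, p(x)/q(x))$, otherwise resample from the renormalized residual $\max(p - q, 0)$), which provably matches the verifier distribution $p$ exactly; it is not the Cactus acceptance rule itself, whose relaxed criterion is not specified in this abstract. The function name and interface are illustrative, not from the paper.

```python
import numpy as np

def speculative_accept(p, q, draft_token, rng):
    """One verification step of standard speculative sampling.

    p: verifier distribution over the vocabulary (1-D array, sums to 1)
    q: draft distribution over the vocabulary
    draft_token: token index sampled from q by the draft model
    Returns (token, accepted). On rejection, resamples from the residual
    distribution max(p - q, 0), renormalized, so that the output token is
    exactly distributed according to p.
    """
    accept_prob = min(1.0, p[draft_token] / q[draft_token])
    if rng.random() < accept_prob:
        return draft_token, True
    residual = np.maximum(p - q, 0.0)
    residual /= residual.sum()
    return int(rng.choice(len(p), p=residual)), False
```

Relaxations such as TAS (and, under an explicit divergence bound, Cactus) modify `accept_prob` upward so that more draft tokens survive verification, trading exactness of the output distribution for throughput.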