🤖 AI Summary
This work addresses the high inference latency of large language models and the limitations of existing speculative sampling methods, which rely on static hyperparameters and struggle to adapt across diverse contexts and domains. The paper proposes Re-SpS, the first reinforcement learning-based dynamic speculative sampling framework, which achieves an optimal trade-off between generation aggressiveness and computational overhead by dynamically adjusting draft tree hyperparameters in real time. Re-SpS introduces a context-aware reinforcement learning mechanism and a multi-step action persistence policy, significantly enhancing inference efficiency. Evaluated on five diverse benchmarks, Re-SpS accelerates inference by up to 5.45× over the base large language model and outperforms the current state-of-the-art method, EAGLE-3, by 1.12×, all while preserving output quality without degradation.
📝 Abstract
Inference latency remains an open challenge for real-world applications of large language models (LLMs). State-of-the-art (SOTA) speculative sampling (SpS) methods for LLMs, like EAGLE-3, use tree-based drafting to explore multiple candidate continuations in parallel. However, the hyperparameters controlling the tree structure are static, which limits flexibility and efficiency across diverse contexts and domains. We introduce Reinforcement learning for Speculative Sampling (Re-SpS), the first reinforcement learning (RL)-based framework for draft tree hyperparameter optimization. Re-SpS dynamically adjusts draft tree hyperparameters in real time, learning context-aware policies that maximize generation speed by balancing speculative aggression with computational overhead. It leverages efficient state representations from target model hidden states and introduces multi-step action persistence for better context modeling. Evaluation across five diverse benchmarks demonstrates consistent improvements over the SOTA method EAGLE-3, achieving up to 5.45$\times$ speedup over the backbone LLM and up to 1.12$\times$ speedup over EAGLE-3, with no loss in output fidelity.
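The abstract describes a control loop: an RL policy reads features derived from the target model's hidden states, picks draft tree hyperparameters, and holds each choice for several decoding steps (multi-step action persistence). The following is a minimal, hypothetical sketch of that loop; the action set, persistence window, feature dimension, and the linear epsilon-greedy policy are all illustrative assumptions, not the paper's actual design.

```python
import numpy as np

# Illustrative candidate (tree depth, top-k width) settings -- assumed values.
ACTIONS = [(3, 2), (5, 4), (7, 8)]
PERSIST_STEPS = 4   # multi-step action persistence: re-decide every 4 steps
HIDDEN_DIM = 16     # stand-in dimension for hidden-state features

rng = np.random.default_rng(0)
# Toy linear Q-function over hidden-state features (one row per action).
W = rng.normal(scale=0.1, size=(len(ACTIONS), HIDDEN_DIM))

def select_action(state, epsilon=0.1):
    """Epsilon-greedy choice of draft-tree hyperparameters."""
    if rng.random() < epsilon:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(W @ state))

def run_generation(num_steps=20):
    """Decoding loop: the chosen action persists for PERSIST_STEPS steps."""
    action = 0
    schedule = []
    for step in range(num_steps):
        if step % PERSIST_STEPS == 0:
            # Stand-in for features extracted from target-model hidden states.
            state = rng.normal(size=HIDDEN_DIM)
            action = select_action(state)
        schedule.append(ACTIONS[action])
    return schedule

schedule = run_generation()
```

In an actual system the reward would be the measured tokens-per-second gain of each speculative round, and the policy would be trained rather than random; the sketch only shows how persistence amortizes the decision cost across decoding steps.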