🤖 AI Summary
To address the high computational cost and poor scalability of the rollout phase (i.e., autoregressive generation over large prompt batches) in post-training of large language models (LLMs), this paper proposes a dynamic decoupled Best-of-N speculative generation method. The approach decouples draft generation from verification, enabling efficient GPU-parallel execution. It further introduces dynamic selection over an ensemble of lightweight draft models, adaptively improving speculation accuracy without additional inference-time compute overhead. Crucially, the method integrates into standard training pipelines without requiring architectural or resource modifications. Experiments show that the method achieves a 1.3–1.7× speedup over baseline rollout implementations and a 1.3–1.5× improvement over vanilla speculative decoding, significantly accelerating post-training iterations while preserving generation correctness and fidelity.
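The draft-then-verify pattern the summary describes can be illustrated with a toy greedy speculative-decoding step: a cheap draft model proposes a short run of tokens sequentially, and the target model then verifies all of them (a step that is parallelizable on a GPU, since every verified position conditions only on already-known tokens). This is a minimal sketch of the general technique, not SpecActor's decoupled scheduler; `draft_next` and `target_next` are hypothetical next-token functions standing in for real model calls.

```python
def speculate_step(prefix, draft_next, target_next, k=4):
    """Propose k draft tokens, then verify them against the target model.

    With greedy decoding, the accepted output is identical to what the
    target model alone would have produced, so correctness is preserved.
    """
    # Draft phase: cheap sequential proposals from the small model.
    draft = []
    ctx = list(prefix)
    for _ in range(k):
        t = draft_next(ctx)
        draft.append(t)
        ctx.append(t)

    # Verify phase: the target checks every drafted position; in a real
    # system all k positions are scored in one parallel forward pass.
    accepted = []
    ctx = list(prefix)
    for t in draft:
        expected = target_next(ctx)
        if t != expected:
            # First mismatch: keep the target's own token and stop.
            accepted.append(expected)
            break
        accepted.append(t)
        ctx.append(t)
    return accepted
```

When the draft agrees with the target on most positions, one target pass yields several tokens, which is the source of the speedup.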
📄 Abstract
Rollout dominates training time in large language model (LLM) post-training, where the trained model generates tokens for a batch of prompts. SpecActor achieves fast rollout with speculative decoding, deploying a fast path (e.g., a smaller model) to accelerate the unparallelizable generation while correctness is guaranteed by fast parallel verification of the outputs with the original model. SpecActor addresses two foundational challenges in speculative rollout with (1) a *dynamic decoupled speculation* execution method that maximizes GPU computational efficiency to realize speedups for large-batch execution, a configuration common in training but unfriendly to speculative execution, and (2) a *dynamic Best-of-N speculation* method that selects and combines different drafting methods according to the rollout progress. This substantially improves speculation accuracy even when the best drafting method is unknown a priori, without requiring extra computational resources. SpecActor is 1.3–1.7× faster than common post-training baselines, and 1.3–1.5× faster than naively adopting speculative decoding for rollout.
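The Best-of-N idea (choosing among drafting methods when the best one is unknown a priori) can be sketched as a simple adaptive policy: track each draft method's recent acceptance rate during rollout and route the next chunk of speculation to the current best. This is only an illustration of the adaptive-selection concept under assumed bookkeeping; the names `DraftStats` and `pick_draft` are invented here, and SpecActor's actual selection and combination policy is more sophisticated.

```python
class DraftStats:
    """Running acceptance statistics for one drafting method."""

    def __init__(self, name):
        self.name = name
        self.proposed = 0
        self.accepted = 0

    def rate(self):
        # Optimistic add-one prior so unexplored drafts still get tried.
        return (self.accepted + 1) / (self.proposed + 1)

    def update(self, proposed, accepted):
        # Called after each verification round with the observed counts.
        self.proposed += proposed
        self.accepted += accepted


def pick_draft(stats):
    """Choose the drafting method with the best observed acceptance rate."""
    return max(stats, key=lambda s: s.rate())
```

Because the statistics are updated as the rollout progresses, the policy can shift to a different draft method mid-rollout if acceptance rates change, without adding any extra model computation.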