Fast LLM Post-training via Decoupled and Best-of-N Speculation

📅 2025-11-20
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the high computational cost and poor scalability of the rollout phase (i.e., autoregressive generation over large prompt batches) in post-training of large language models (LLMs), this paper proposes SpecActor, a dynamic decoupled Best-of-N speculative generation method. The approach decouples draft generation from verification, enabling efficient GPU-parallel execution. It further introduces dynamic selection over an ensemble of lightweight draft models, adaptively improving speculation accuracy without additional inference-time compute overhead. Crucially, the method integrates into standard training pipelines without requiring architectural or resource modifications. Experiments demonstrate 1.3–1.7× speedup over baseline rollout implementations and 1.3–1.5× over vanilla speculative decoding, significantly accelerating post-training iterations while preserving generation correctness and fidelity.

๐Ÿ“ Abstract
Rollout dominates the training time in large language model (LLM) post-training, where the trained model is used to generate tokens given a batch of prompts. SpecActor achieves fast rollout with speculative decoding, which deploys a fast path (e.g., a smaller model) to accelerate the unparallelizable generation, while correctness is guaranteed by fast parallel verification of the outputs with the original model. SpecActor addresses two foundational challenges in speculative rollout with (1) a dynamic decoupled speculation execution method that maximizes GPU computational efficiency to realize speedup for large-batch execution, a configuration common in training but unfriendly to speculative execution, and (2) a dynamic Best-of-N speculation method that selects and combines different drafting methods according to the rollout progress. The latter substantially improves speculation accuracy even when the best drafting method is unknown a priori, without requiring extra computation resources. SpecActor is 1.3–1.7× faster than common post-training baselines, and 1.3–1.5× faster than naively adopting speculative decoding for rollout.
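The draft-then-verify loop at the heart of speculative decoding can be sketched as follows. This is a minimal toy illustration, not SpecActor's implementation: `draft_model` and `target_model` are hypothetical stand-ins (simple arithmetic rules in place of real LLMs), and the decoupled GPU scheduling the paper describes is not modeled here.

```python
def draft_model(prefix, k):
    """Cheap drafter: proposes k next tokens (toy rule: previous + 1 mod 10)."""
    out, last = [], prefix[-1]
    for _ in range(k):
        last = (last + 1) % 10
        out.append(last)
    return out

def target_model(prefix):
    """Expensive target model: the ground-truth next token (same toy rule)."""
    return (prefix[-1] + 1) % 10

def speculative_step(prefix, k=4):
    """Draft k tokens, then verify them against the target model.

    Accept the longest draft prefix the target agrees with; on the first
    mismatch, substitute the target's token and stop. In a real system the
    k verifications run as one parallel batch on the target model.
    """
    draft = draft_model(prefix, k)
    accepted = []
    for tok in draft:
        expected = target_model(prefix + accepted)
        if tok == expected:
            accepted.append(tok)       # draft agrees with target: keep it
        else:
            accepted.append(expected)  # mismatch: take target's token, stop
            break
    return prefix + accepted
```

Because verification checks every drafted token against the original model, the output is identical to what the target model alone would generate; the drafter only changes how fast those tokens are produced.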
Problem

Research questions and friction points this paper is trying to address.

Accelerates LLM post-training rollout via speculative decoding
Enhances GPU efficiency with dynamic decoupled speculation method
Improves speculation accuracy through dynamic Best-of-N selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic decoupled speculation for GPU efficiency
Dynamic Best-of-N speculation for accuracy
Speculative decoding with parallel verification
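The dynamic Best-of-N idea, tracking how well each drafter's proposals survive verification and routing subsequent speculation rounds to the current best, can be sketched as a simple bandit-style selector. This is one plausible policy written for illustration, not the paper's actual selection algorithm; the class and drafter names are invented.

```python
class BestOfNSelector:
    """Toy selector over multiple drafting methods.

    Records each drafter's empirical acceptance rate (verified tokens /
    proposed tokens) during rollout and picks the current best drafter
    for the next speculation round.
    """

    def __init__(self, drafter_names):
        # per-drafter counters: [tokens accepted, tokens proposed]
        self.stats = {name: [0, 0] for name in drafter_names}

    def record(self, drafter, accepted, proposed):
        """Update counters after one verified speculation round."""
        self.stats[drafter][0] += accepted
        self.stats[drafter][1] += proposed

    def acceptance_rate(self, drafter):
        accepted, proposed = self.stats[drafter]
        # optimistic prior: untried drafters rate 1.0 so each gets explored
        return accepted / proposed if proposed else 1.0

    def best(self):
        """Drafter with the highest observed acceptance rate."""
        return max(self.stats, key=self.acceptance_rate)
```

A rollout loop would call `best()` before each speculation round and `record()` after verification, so the choice of drafter adapts as generation progresses even when the best drafting method is unknown a priori.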
Rongxin Cheng
Institute of Parallel and Distributed Systems, Shanghai Jiao Tong University
Kai Zhou
ByteDance Seed
Xingda Wei
Shanghai Jiao Tong University
Systems for AI, Distributed Systems, Operating Systems
Siyuan Liu
Institute of Parallel and Distributed Systems, Shanghai Jiao Tong University
Mingcong Han
Shanghai Jiao Tong University
Computer Systems
Mingjing Ai
Unaffiliated
Yeju Zhou
ByteDance Seed
Baoquan Zhong
ByteDance Seed
Wencong Xiao
ByteDance
Distributed Systems, Machine Learning Systems, Resource Management
Rong Chen
Institute of Parallel and Distributed Systems, Shanghai Jiao Tong University
Haibo Chen
Institute of Parallel and Distributed Systems, Shanghai Jiao Tong University