🤖 AI Summary
Low GPU utilization in large language model (LLM) reinforcement learning (RL) stems primarily from the rollout phase dominating training time and from intra-batch sequence-length imbalance causing computational idleness. To address this, we propose RhymeRL, a system designed to accelerate RL training without compromising model accuracy. Its core innovations are: (1) HistoSpec, a speculative decoding engine that leverages token-level similarity across historical rollout sequences to generate accurate drafts efficiently; and (2) HistoPipe, a two-level scheduling strategy that supports asynchronous rollouts, dynamic history-aware batching, and load balancing. RhymeRL integrates seamlessly with mainstream RL frameworks and requires no modifications to existing training logic. Evaluated on real-world production clusters ranging from tens to thousands of GPUs, RhymeRL achieves a 2.6× speedup over state-of-the-art methods while strictly preserving model accuracy.
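To illustrate the load-balancing idea, here is a minimal sketch of history-aware scheduling, assuming last epoch's response lengths are a usable predictor of this epoch's. The greedy longest-predicted-first assignment below is a standard LPT heuristic used for illustration; the names `balance_rollouts` and `hist_len` are hypothetical, not from the paper.

```python
import heapq

def balance_rollouts(prompt_ids, hist_len, num_workers):
    """Greedy LPT assignment: schedule prompts with the longest
    predicted rollouts first onto the least-loaded worker."""
    # Min-heap of (current_load, worker_id)
    heap = [(0, w) for w in range(num_workers)]
    heapq.heapify(heap)
    assignment = {w: [] for w in range(num_workers)}
    # Visit prompts in descending order of predicted rollout length
    for pid in sorted(prompt_ids, key=lambda p: -hist_len[p]):
        load, w = heapq.heappop(heap)
        assignment[w].append(pid)
        heapq.heappush(heap, (load + hist_len[pid], w))
    return assignment
```

With predicted lengths {8, 7, 3, 2, 1, 1} and two workers, this yields two groups of total length 11 each, avoiding the GPU bubble that a naive split (e.g. 8+7 on one worker) would create.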
📝 Abstract
With the rapid advancement of large language models (LLMs), reinforcement learning (RL) has emerged as a pivotal methodology for enhancing the reasoning capabilities of LLMs. Unlike traditional pre-training approaches, RL comprises multiple stages (rollout, reward, and training), which necessitates collaboration among various worker types. However, current RL systems still suffer from substantial GPU underutilization due to two primary factors: (1) the rollout stage dominates the overall RL process because of test-time scaling; (2) imbalanced rollout lengths within the same batch create GPU bubbles. While prior solutions such as asynchronous execution and truncation offer partial relief, they may trade training accuracy for efficiency.
Our key insight stems from a previously overlooked observation: rollout responses exhibit remarkable similarity across adjacent training epochs. Based on this insight, we introduce RhymeRL, an LLM RL system designed to accelerate RL training with two key innovations. First, to speed up rollout generation, we present HistoSpec, a speculative decoding inference engine that exploits the similarity of historical rollout token sequences to obtain accurate drafts. Second, to eliminate rollout bubbles, we introduce HistoPipe, a two-tier scheduling strategy that leverages the similarity of historical rollout length distributions to balance workload among rollout workers. We evaluated RhymeRL in a real production environment, demonstrating scalability from dozens to thousands of GPUs. Experimental results show that RhymeRL achieves a 2.6× performance improvement over existing methods, without compromising accuracy or modifying the RL paradigm.
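The history-as-draft idea can be sketched as follows. This is a minimal toy version, assuming integer token IDs and a stand-in greedy next-token function `target_model_next` in place of the real LLM; the function names are hypothetical and the actual HistoSpec verification logic may differ.

```python
def propose_draft(history, prefix, k=4):
    """Propose up to k draft tokens by matching the current partial
    response against the previous epoch's rollout for the same prompt."""
    n = len(prefix)
    if history[:n] == prefix:        # history still agrees with the prefix
        return history[n:n + k]
    return []                        # diverged: fall back to normal decoding

def speculative_step(target_model_next, history, prefix, k=4):
    """Verify draft tokens with the target model; accept the longest
    agreeing run, then append one token from the target model itself."""
    draft = propose_draft(history, prefix, k)
    out = list(prefix)
    for tok in draft:
        if target_model_next(out) == tok:
            out.append(tok)          # draft token accepted for free
        else:
            break                    # first mismatch ends acceptance
    out.append(target_model_next(out))  # one guaranteed token from the model
    return out
```

When the current epoch's response tracks the historical one, a single verification step accepts several tokens at once, which is exactly where the rollout speedup comes from; when the responses diverge, the step degenerates to ordinary one-token decoding.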