🤖 AI Summary
Existing synchronous reinforcement learning (RL) systems suffer from severe workload imbalance during the large language model (LLM) rollout phase, resulting in high tail latency and low GPU utilization. This paper introduces Seer, an online context learning system tailored for LLM rollout in RL training. The approach rests on three core innovations: (1) divided rollout with dynamic load balancing to mitigate inter-request computational heterogeneity; (2) context-aware scheduling that exploits similarities in output length and generation patterns among requests sharing the same prompt for fine-grained task assignment; and (3) adaptive grouped speculative decoding to improve batch efficiency. Evaluated on production-grade RL workloads, Seer achieves 74–97% higher end-to-end rollout throughput and reduces long-tail latency by 75–93%, significantly accelerating RL training iterations.
📝 Abstract
Reinforcement Learning (RL) has become critical for advancing modern Large Language Models (LLMs), yet existing synchronous RL systems face severe performance bottlenecks. The rollout phase, which dominates end-to-end iteration time, suffers from substantial long-tail latency and poor resource utilization due to inherent workload imbalance. We present Seer, a novel online context learning system that addresses these challenges by exploiting previously overlooked similarities in output lengths and generation patterns among requests sharing the same prompt. Seer introduces three key techniques: divided rollout for dynamic load balancing, context-aware scheduling, and adaptive grouped speculative decoding. Together, these mechanisms substantially reduce long-tail latency and improve resource efficiency during rollout. Evaluations on production-grade RL workloads demonstrate that Seer improves end-to-end rollout throughput by 74% to 97% and reduces long-tail latency by 75% to 93% compared to state-of-the-art synchronous RL systems, significantly accelerating RL training iterations.
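
The core scheduling insight can be sketched roughly as follows. This is a minimal illustration, not Seer's implementation: it assumes that requests sharing a prompt have correlated output lengths, so lengths of completed siblings serve as estimates for pending ones, and dispatching the longest-expected requests first shrinks the tail. All class, method, and variable names here are hypothetical.

```python
from collections import defaultdict

# Hypothetical sketch of context-aware scheduling: completed requests
# that share a prompt supply a length estimate for their pending
# siblings; longest-expected-first dispatch trims long-tail latency.

DEFAULT_ESTIMATE = 1024  # fallback when no sibling has finished yet


class ContextAwareScheduler:
    def __init__(self):
        # prompt_id -> output lengths of finished sibling requests
        self.observed = defaultdict(list)

    def record(self, prompt_id, output_len):
        """Feed back the output length of a completed sibling request."""
        self.observed[prompt_id].append(output_len)

    def estimate(self, prompt_id):
        """Mean sibling length, or a fixed default with no observations."""
        lens = self.observed[prompt_id]
        return sum(lens) / len(lens) if lens else DEFAULT_ESTIMATE

    def order(self, pending):
        """pending: list of (request_id, prompt_id); longest-first."""
        return sorted(pending, key=lambda r: self.estimate(r[1]), reverse=True)


sched = ContextAwareScheduler()
sched.record("p1", 200)
sched.record("p1", 400)   # p1's siblings averaged ~300 tokens
sched.record("p2", 3000)  # p2's sibling produced a long generation
queue = [("r1", "p1"), ("r2", "p2"), ("r3", "p3")]
# p2 is dispatched first; unseen p3 falls back to the default estimate
print(sched.order(queue))
```

The same feedback loop naturally pairs with divided rollout: a long request can be split into steps, with each completed step refining the remaining-length estimate before the next scheduling decision.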