🤖 AI Summary
Reinforcement learning (RL) training of large language models (LLMs) on heterogeneous GPU clusters suffers from low resource utilization due to significant disparities in computational intensity, memory demand, and communication patterns across the three sequential phases: rollout generation, reward computation, and policy update.
Method: This paper proposes AReaL-Hex, a system built on a fully asynchronous RL architecture that decouples these three phases. A two-stage scheduler combines mixed-integer linear programming (MILP)-based constrained search with graph partitioning to assign HBM/I/O-bound and compute-bound tasks to the most suitable heterogeneous devices while preserving data freshness.
Contribution/Results: Experiments on 1.5B–14B LLMs show that AReaL-Hex achieves up to 1.50× higher training throughput under the same total budget and reduces training cost by up to 1.46× at equivalent throughput, substantially improving efficiency on heterogeneous GPU clusters.
📄 Abstract
Maximizing the training throughput and cost-efficiency of RL for LLMs is essential to democratizing this advanced technique. One promising but challenging approach is to deploy such a computational workflow over heterogeneous GPUs. Unlike conventional large-scale LLM pretraining, RL training generally decomposes into three coupled stages, i.e., rollout generation, reward computation, and policy/value updates, which exhibit markedly different compute intensities, memory footprints, and communication patterns. Recent research shows that fully asynchronous RL training can disaggregate these stages across disjoint hardware pools without sacrificing training stability, creating a great opportunity for real-world heterogeneous deployment. To this end, we present AReaL-Hex, a heterogeneity-aware asynchronous RL training system that schedules rollout generation and policy model training across heterogeneous GPUs while enforcing data staleness bounds. Concretely, we use a two-phase scheduler: (i) a constrained search with MILP to select per-stage parallelization strategies and workload assignments given a resource budget, and (ii) a graph-partitioning step that allocates heterogeneous GPUs and interconnects to maximize end-to-end throughput. Built atop a fully asynchronous RL architecture, AReaL-Hex maps HBM-I/O-bound generation and compute-bound optimization to the most cost-efficient resources and balances their producer-consumer interactions to avoid both idleness and stale rollout trajectories. On mathematical reasoning tasks with models of various scales (1.5B, 7B, and 14B), compared to homogeneous deployments of state-of-the-art asynchronous RL systems, AReaL-Hex: (i) delivers up to 1.50× higher training throughput under the same total budget; and (ii) reduces training cost by up to 1.46× at the same training throughput.
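To make phase (i) of the scheduler concrete, here is a toy, brute-force stand-in for the MILP-based constrained search: it picks per-stage GPU counts for two hypothetical device types so that end-to-end throughput (the bottleneck between the rollout producer and the training consumer) is maximized under a total cost budget. The device types, costs, and throughput numbers below are invented for illustration and are not from the paper; a real MILP solver, together with the paper's actual variables for parallelization strategies and staleness bounds, would replace the exhaustive loop.

```python
from itertools import product

# Hypothetical GPU types (NOT from the paper):
# name -> (hourly cost, rollout throughput, training throughput)
GPU_TYPES = {
    "A": (4.0, 10.0, 30.0),  # compute-oriented accelerator
    "B": (1.5, 12.0, 6.0),   # cheaper, high-HBM-bandwidth accelerator
}

def best_assignment(budget, max_count=8):
    """Brute-force stand-in for the MILP: choose per-stage GPU counts
    (rollout vs. training, per device type) that maximize end-to-end
    throughput under a total cost budget. Since rollout feeds training,
    the pipeline runs at the minimum of the two stage throughputs."""
    best_throughput, best_alloc = 0.0, None
    counts = range(max_count + 1)
    # (A on rollout, B on rollout, A on training, B on training)
    for na_r, nb_r, na_t, nb_t in product(counts, repeat=4):
        cost = (na_r + na_t) * GPU_TYPES["A"][0] + (nb_r + nb_t) * GPU_TYPES["B"][0]
        if cost > budget:
            continue  # violates the resource-budget constraint
        rollout = na_r * GPU_TYPES["A"][1] + nb_r * GPU_TYPES["B"][1]
        train   = na_t * GPU_TYPES["A"][2] + nb_t * GPU_TYPES["B"][2]
        throughput = min(rollout, train)  # producer-consumer balance
        if throughput > best_throughput:
            best_throughput, best_alloc = throughput, (na_r, nb_r, na_t, nb_t)
    return best_throughput, best_alloc

throughput, alloc = best_assignment(budget=20.0)
print(throughput, alloc)
```

Even this toy version shows why the search tends to place rollout on the cheaper HBM-bandwidth-rich devices and training on the compute-oriented ones, and why balancing the two stages matters: any imbalance either idles the trainer or lets rollout trajectories go stale.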