🤖 AI Summary
Fixed parallelization strategies in RLHF training fail to adapt to the heterogeneous computational workloads across stages (e.g., generation, inference, and training over multiple LLM instances), causing severe efficiency bottlenecks. Method: The paper proposes parameter ReaLlocation, a technique that dynamically redistributes LLM parameters across the GPU cluster, and builds it into ReaL, an RLHF training system. Its core components are a fine-grained execution-plan abstraction, a lightweight runtime performance estimator, and a tailored search algorithm over parallelization strategies, which together adapt resource allocation and parallelization throughout the RLHF pipeline. Contribution/Results: Evaluated on LLaMA models with up to 70 billion parameters on up to 128 GPUs, ReaL achieves speedups of up to 3.58× over baseline approaches. In the long-context scenario, its execution plans outperform heuristic parallelization based on Megatron-LM by 81% on average, substantially improving end-to-end RLHF training efficiency and GPU utilization.
📝 Abstract
Reinforcement Learning from Human Feedback (RLHF) is a pivotal technique for empowering large language model (LLM) applications. Compared with the supervised training process of LLMs, the RLHF training process is much more sophisticated, involving a diverse range of computation workloads with intricate dependencies between multiple LLM instances. Therefore, simply adopting the fixed parallelization strategies from supervised LLM training can be insufficient for RLHF and result in low training efficiency. To overcome this limitation, we propose a novel technique named parameter ReaLlocation, which dynamically adapts the parallelization strategies for different workloads during training by redistributing LLM parameters across the training cluster. Building upon this idea, we introduce ReaL, a pioneering system for efficient RLHF training. ReaL introduces the concept of an execution plan, which defines a fine-grained resource allocation and parallelization strategy particularly designed for RLHF training. Based on this concept, ReaL employs a tailored search algorithm with a lightweight run-time estimator to automatically discover an efficient execution plan for a given RLHF experiment. Subsequently, the runtime engine deploys the selected plan by effectively parallelizing computations and redistributing parameters. We evaluate ReaL on LLaMA models with up to 70 billion parameters and 128 GPUs. The experimental results demonstrate that ReaL achieves speedups of up to $3.58\times$ compared to baseline methods. Furthermore, the execution plans generated by ReaL exhibit an average of $81\%$ performance improvement over heuristic approaches based on Megatron-LM in the long-context scenario. The source code of ReaL is publicly available at https://github.com/openpsi-project/ReaLHF.
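To make the core idea concrete, here is a minimal toy sketch of searching for a per-stage execution plan with a lightweight cost estimator. All names (`Strategy`, `estimate_stage_time`, `realloc_cost`, the cost coefficients, and the 8-GPU cluster size) are hypothetical illustrations, not ReaL's actual abstractions or cost model; the paper's search algorithm and estimator are far more detailed.

```python
from dataclasses import dataclass
from itertools import product

N_GPUS = 8  # toy cluster size; the paper evaluates up to 128 GPUs


@dataclass(frozen=True)
class Strategy:
    """A 3D parallelization choice (hypothetical simplification)."""
    dp: int  # data-parallel degree
    tp: int  # tensor-parallel degree
    pp: int  # pipeline-parallel degree


def candidate_strategies(n_gpus):
    """Enumerate (dp, tp, pp) factorizations that exactly fill the cluster."""
    for dp, tp, pp in product([1, 2, 4, 8], repeat=3):
        if dp * tp * pp == n_gpus:
            yield Strategy(dp, tp, pp)


def estimate_stage_time(stage, s):
    """Toy cost model: generation favors data parallelism (throughput),
    while training favors tensor/pipeline sharding. Coefficients are
    arbitrary; a real estimator would be profiled per workload."""
    base = {"generation": 10.0, "inference": 4.0, "training": 8.0}[stage]
    if stage == "generation":
        return base / s.dp + 0.5 * (s.tp + s.pp)
    return base / (s.tp * s.pp) + 0.3 * s.dp


def realloc_cost(a, b):
    """Toy cost of redistributing parameters between two strategies."""
    return 0.0 if a == b else 1.0


def search_plan(stages, n_gpus=N_GPUS):
    """Greedy search: for each stage, pick the strategy minimizing its
    estimated time plus the cost of reallocating from the previous one."""
    plan, prev = [], None
    for stage in stages:
        best = min(
            candidate_strategies(n_gpus),
            key=lambda s: estimate_stage_time(stage, s)
            + (realloc_cost(prev, s) if prev is not None else 0.0),
        )
        plan.append((stage, best))
        prev = best
    return plan
```

Running `search_plan(["generation", "training"])` on this toy model assigns different strategies to the two stages, illustrating why a single fixed parallelization is suboptimal once per-stage workloads differ, and why the reallocation cost must be weighed against the per-stage savings.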