HetRL: Efficient Reinforcement Learning for LLMs in Heterogeneous Environments

📅 2025-12-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
The low training efficiency of large language models (LLMs) in heterogeneous GPU environments (cross-generation hardware, cross-region deployment, and heterogeneous networks) hampers practical reinforcement learning (RL) fine-tuning. Method: We propose the first constraint-aware joint optimization framework tailored for heterogeneous RL training, featuring a multi-level decomposition search algorithm and a Successive Halving–based dynamic budget allocation mechanism that co-schedule computation, communication, and task dependencies. The system is deeply optimized for the LLM RLHF pipeline. Results: In evaluations consuming 20,000 GPU-hours, HetRL achieves 1.17×–9.17× (3.17× on average) higher throughput than state-of-the-art systems and markedly improves utilization of legacy and mid-tier GPU resources. Core contributions: a unified modeling paradigm for heterogeneous RL training and a hierarchical, adaptive scheduling methodology.

📝 Abstract
As large language models (LLMs) continue to scale and new GPUs are released even more frequently, there is an increasing demand for LLM post-training in heterogeneous environments to fully leverage underutilized mid-range or previous-generation GPUs across regions and alleviate the shortage of homogeneous high-end GPUs within a single region. However, achieving high-performance reinforcement learning (RL) training for LLMs on such computing resources remains challenging because the workflow involves multiple models and tasks with complex computation and data dependencies. In this paper, we present HetRL, a distributed system for efficient RL training in infrastructures with heterogeneous GPUs and networks. HetRL formulates the scheduling of RL training in heterogeneous environments as a constrained joint optimization problem and introduces a novel scheduling algorithm that (1) decomposes the complex search space with a multi-level search framework; and (2) allocates the search budget via successive halving. Our extensive evaluation, consuming 20,000 GPU-hours, shows that HetRL delivers up to 9.17x the throughput of state-of-the-art systems, and 3.17x on average, under various workloads and settings.
Problem

Research questions and friction points this paper is trying to address.

Efficient RL training for LLMs in heterogeneous GPU environments
Scheduling RL training as a constrained joint optimization problem
Overcoming challenges from complex computation and data dependencies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Schedules RL training via constrained joint optimization
Decomposes search space with multi-level framework
Allocates search budget using successive halving
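To make the budget-allocation idea concrete, here is a minimal sketch of successive halving applied to candidate schedules. This is not HetRL's implementation; the candidate representation and the `evaluate` throughput estimator are hypothetical placeholders. The core pattern is standard: evaluate all surviving candidates under a small budget, keep the best half, and double the per-candidate budget each round.

```python
def successive_halving(candidates, evaluate, initial_budget=1):
    """Allocate a search budget across candidate schedules.

    candidates: list of arbitrary schedule objects (placeholder).
    evaluate(c, budget): hypothetical estimator returning predicted
        throughput for candidate c under the given budget (higher is better).
    Each round, all survivors are scored, the top half is kept,
    and the per-candidate budget doubles, so most of the total budget
    goes to the most promising schedules.
    """
    survivors = list(candidates)
    budget = initial_budget
    while len(survivors) > 1:
        scored = [(evaluate(c, budget), c) for c in survivors]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        # Keep the better half (at least one candidate survives).
        survivors = [c for _, c in scored[: max(1, len(scored) // 2)]]
        budget *= 2
    return survivors[0]
```

With n candidates, each round halves the pool while doubling per-candidate budget, so total evaluation cost stays roughly linear in n rather than growing with n times the final budget.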