Highly Parallelized Reinforcement Learning Training with Relaxed Assignment Dependencies

📅 2025-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing deep reinforcement learning (DRL) training systems suffer from low parallelism and suboptimal efficiency due to strong inter-task data dependencies. To address this, we propose TianJi, a high-throughput distributed DRL training system. Its core innovations are: (1) a relaxed assignment dependency mechanism that decouples sample generation from consumption and dynamically controls sample staleness to preserve convergence; and (2) an event-driven asynchronous communication framework with distributed sample quality control and a loosely coupled component architecture. Experiments show that, on an 8-node cluster, TianJi converges 1.6× faster and delivers 7.13× higher throughput than XingTian, and it achieves a peak convergence speedup of 4.37× over related systems. Its data transmission efficiency approaches the hardware limit, and it significantly outperforms both RLlib and XingTian on on-policy algorithms.
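The staleness-control idea in the summary can be illustrated with a minimal sketch. Everything below is hypothetical and not taken from the TianJi codebase: samples are tagged with the policy version that produced them, producers enqueue without waiting for the learner (the relaxed assignment dependency), and the learner drops any sample whose staleness exceeds a bound so that sample quality, and hence convergence, is protected. The class and constant names (`StalenessBoundedBuffer`, `MAX_STALENESS`) are illustrative assumptions.

```python
import queue
import threading

# Hypothetical staleness bound: how many policy versions old a sample may be.
MAX_STALENESS = 4

class StalenessBoundedBuffer:
    """Decouples sample production from consumption (illustrative sketch)."""

    def __init__(self, capacity=1024):
        self.samples = queue.Queue(maxsize=capacity)
        self.version = 0              # learner's current policy version
        self.lock = threading.Lock()

    def put(self, sample, producer_version):
        # Producers enqueue tagged samples without waiting on the learner.
        self.samples.put((producer_version, sample))

    def get_fresh(self):
        # The learner pops until it finds a sufficiently fresh sample;
        # overly stale samples are discarded to protect convergence.
        while True:
            v, sample = self.samples.get()
            if self.version - v <= MAX_STALENESS:
                return sample

    def bump_version(self):
        # Called after each learner update step.
        with self.lock:
            self.version += 1

buf = StalenessBoundedBuffer()
buf.put("traj-0", producer_version=0)
for _ in range(5):
    buf.bump_version()          # learner advances 5 policy versions
buf.put("traj-1", producer_version=5)
print(buf.get_fresh())          # prints "traj-1" ("traj-0" is stale and skipped)
```

A real system would additionally throttle producers when staleness grows system-wide, balancing sample production against consumption rather than only filtering at the consumer.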

📝 Abstract
As the demand for superior agents grows, the training complexity of Deep Reinforcement Learning (DRL) increases, making the acceleration of DRL training a major research focus. Dividing the DRL training process into subtasks and using parallel computation can effectively reduce training costs. However, current DRL training systems lack sufficient parallelization due to data assignment between subtask components. This assignment issue has been ignored, but addressing it can further boost training efficiency. Therefore, we propose a high-throughput distributed RL training system called TianJi. It relaxes assignment dependencies between subtask components and enables event-driven asynchronous communication, while maintaining clear boundaries between subtask components. To address the convergence uncertainty introduced by relaxed assignment dependencies, TianJi employs a distributed strategy based on balancing sample production and consumption: it controls the staleness of samples to correct their quality, ensuring convergence. We conducted extensive experiments. TianJi achieves a convergence time acceleration ratio of up to 4.37 compared to related systems. When scaled to eight computational nodes, TianJi shows a convergence time speedup of 1.6 and a throughput speedup of 7.13 relative to XingTian, demonstrating both its ability to accelerate training and its scalability. In data transmission efficiency experiments, TianJi significantly outperforms other systems, approaching hardware limits. TianJi is also effective for on-policy algorithms, achieving convergence time acceleration ratios of 4.36 and 2.95 compared to RLlib and XingTian. TianJi is accessible at https://github.com/HiPRL/TianJi.git.
Problem

Research questions and friction points this paper is trying to address.

Accelerate Deep Reinforcement Learning training
Enhance parallelization in DRL systems
Address data assignment dependencies in subtasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Relaxed assignment dependencies enhance parallelism
Event-driven asynchronous communication boosts efficiency
Distributed strategy balances sample quality
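The event-driven asynchronous communication highlighted above can be sketched as a small publish/subscribe bus. This is an illustrative assumption, not TianJi's actual API: loosely coupled components (actors, a learner) never call each other directly; they publish events, and registered handlers run when data arrives, so no component blocks waiting on another. The names `EventBus`, `sample_ready`, and `weights_updated` are hypothetical.

```python
from collections import defaultdict

class EventBus:
    """Minimal event-driven bus keeping components loosely coupled (sketch)."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event, handler):
        # A component registers interest in an event type.
        self.handlers[event].append(handler)

    def publish(self, event, payload):
        # The publisher fires and moves on; handlers run on arrival.
        for handler in self.handlers[event]:
            handler(payload)

bus = EventBus()
train_batches = []

# Learner consumes samples only when they arrive, never polling actors.
bus.subscribe("sample_ready", train_batches.append)
# Actors refresh weights when the learner broadcasts them.
bus.subscribe("weights_updated", lambda w: print("actor got weights", w))

bus.publish("sample_ready", {"obs": [0.1], "reward": 1.0})
bus.publish("weights_updated", "v1")
print(len(train_batches))  # prints 1
```

In a distributed deployment the dispatch would cross process and node boundaries (e.g. over sockets or RPC) rather than run in-process, but the decoupling principle is the same.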
Zhouyu He
College of Computer Science and Technology, National University of Defense Technology; National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology
Peng Qiao
National University of Defense Technology
image processing, computer vision, machine learning, deep learning
Rongchun Li
National University of Defense Technology
computer vision, deep learning, computational learning, GPU, FPGA, high-performance computing, software-defined radio
Yong Dou
College of Computer Science and Technology, National University of Defense Technology; National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology
Yusong Tan
National University of Defense Technology
computer, operating system, cloud, AI