🤖 AI Summary
In asynchronous parallel distributed reinforcement learning, policy gradient methods suffer from slow convergence and low efficiency under computational and communication heterogeneity. To address this, we propose two new asynchronous gradient aggregation algorithms: Rennala NIGT, designed for homogeneous settings to improve total computation and communication complexity while supporting AllReduce, and Malenia NIGT, designed for heterogeneous environments with strictly tighter convergence guarantees. These are the first methods, in both theory and practice, to support AllReduce while jointly accommodating heterogeneous delays and asynchronous updates. Through rigorous convergence analysis and extensive experiments across multiple environments, our methods substantially improve training throughput and policy performance, achieving state-of-the-art distributed RL efficiency on standard benchmarks.
📝 Abstract
We study distributed reinforcement learning (RL) with policy gradient methods under asynchronous, parallel computation and communication. While non-distributed methods are well understood theoretically and have achieved remarkable empirical success, their distributed counterparts remain less explored, particularly in the presence of heterogeneous asynchronous computations and communication bottlenecks. We introduce two new algorithms, Rennala NIGT and Malenia NIGT, which implement asynchronous policy gradient aggregation and achieve state-of-the-art efficiency. In the homogeneous setting, Rennala NIGT provably improves the total computational and communication complexity while supporting the AllReduce operation. In the heterogeneous setting, Malenia NIGT simultaneously handles asynchronous computations and heterogeneous environments with strictly better theoretical guarantees. Our results are further corroborated by experiments showing that our methods significantly outperform prior approaches.
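To make the asynchronous aggregation pattern concrete, here is a minimal toy sketch, not the paper's actual algorithms: a server waits for a fixed batch of stochastic gradients from workers that finish at heterogeneous, unpredictable times (Rennala-style batching), then applies a normalized momentum update (NIGT-style step). The objective, worker delays, and all constants are illustrative assumptions.

```python
import queue
import random
import threading
import time

def grad(x):
    # Hypothetical noisy stochastic gradient of the toy objective f(x) = x^2.
    return 2.0 * x + random.gauss(0.0, 0.1)

def async_round(x, num_workers=4, batch=8):
    """Collect `batch` gradients at the current point from workers with
    heterogeneous delays; the server proceeds as soon as `batch` arrive."""
    q = queue.Queue()
    stop = threading.Event()

    def worker():
        while not stop.is_set():
            time.sleep(random.uniform(0.001, 0.01))  # heterogeneous compute delay
            q.put(grad(x))

    threads = [threading.Thread(target=worker, daemon=True) for _ in range(num_workers)]
    for t in threads:
        t.start()
    grads = [q.get() for _ in range(batch)]  # block until `batch` gradients arrive
    stop.set()                               # late (stale) gradients are discarded
    return sum(grads) / batch

def train(x0=5.0, steps=100, lr=0.1, beta=0.9):
    x, m = x0, 0.0
    for _ in range(steps):
        g = async_round(x)
        m = beta * m + (1.0 - beta) * g      # momentum estimate of the gradient
        x -= lr * m / max(abs(m), 1e-8)      # normalized (unit-length) step
    return x

if __name__ == "__main__":
    print(train())  # iterate moves from x0 = 5.0 toward the minimizer at 0
```

Waiting for a batch count rather than for every worker is what makes the server robust to stragglers; the normalized momentum step keeps the update magnitude fixed regardless of gradient scale.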