🤖 AI Summary
In multi-agent reinforcement learning (MARL), actor-critic algorithms suffer from miscoordination because each agent updates its policy against the current policies of the other agents, ignoring that those agents are updating simultaneously at the same step. To address this, we propose a recursive $K$-level policy gradient method: during its own policy update, each agent recursively anticipates the updated policies of the other agents within the same update step, mitigating non-stationarity and improving cooperative stability. The method is modular and integrates with mainstream MARL algorithms—including MAPPO, MADDPG, and FACMAC—without architectural modification. We prove that the method with finitely many recursion levels converges monotonically to a local Nash equilibrium under certain conditions, and empirical evaluation on StarCraft II and multi-agent MuJoCo benchmarks shows improved convergence speed and final performance over existing deep MARL algorithms.
📝 Abstract
Actor-critic algorithms for deep multi-agent reinforcement learning (MARL) typically employ a policy update that responds to the current strategies of other agents. While straightforward, this approach does not account for the updates of other agents at the same update step, resulting in miscoordination. In this paper, we introduce the $K$-Level Policy Gradient (KPG), a method that recursively updates each agent against the updated policies of other agents, speeding up the discovery of effective coordinated policies. We theoretically prove that KPG with finite iterates achieves monotonic convergence to a local Nash equilibrium under certain conditions. We provide principled implementations of KPG by applying it to the deep MARL algorithms MAPPO, MADDPG, and FACMAC. Empirically, we demonstrate superior performance over existing deep MARL algorithms in StarCraft II and multi-agent MuJoCo.
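To make the recursive update concrete, here is a minimal, hypothetical sketch on a toy two-agent cooperative matrix game (not the paper's implementation; the game, the finite-difference gradient, and one plausible form of the level-$k$ update, $\theta_i^{(k)} = \theta_i + \alpha \nabla_{\theta_i} J(\theta_i, \theta_{-i}^{(k-1)})$, are illustrative assumptions). At level 0 each agent sees the others' current policies; at level $k$ it ascends against their level-$(k-1)$ updated policies:

```python
import numpy as np

# Hypothetical 2-agent cooperative game: both agents receive R[a1, a2].
R = np.array([[1.0, 0.0],
              [0.0, 0.5]])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def expected_return(theta1, theta2):
    # Shared objective J(theta1, theta2) under softmax policies.
    return softmax(theta1) @ R @ softmax(theta2)

def policy_grad(theta, other_theta, agent, eps=1e-5):
    # Finite-difference gradient of J w.r.t. one agent's logits
    # (an illustrative stand-in for the actor-critic policy gradient).
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        if agent == 0:
            g[i] = (expected_return(theta + d, other_theta)
                    - expected_return(theta - d, other_theta)) / (2 * eps)
        else:
            g[i] = (expected_return(other_theta, theta + d)
                    - expected_return(other_theta, theta - d)) / (2 * eps)
    return g

def k_level_update(theta1, theta2, K=3, lr=0.5):
    # Level 0: the other agent's current policy. At level k, each agent
    # ascends against the other's level k-1 updated policy, anticipating
    # its within-step change; K=1 recovers the standard simultaneous update.
    prev1, prev2 = theta1, theta2
    for _ in range(K):
        new1 = theta1 + lr * policy_grad(theta1, prev2, agent=0)
        new2 = theta2 + lr * policy_grad(theta2, prev1, agent=1)
        prev1, prev2 = new1, new2
    return prev1, prev2
```

In this coordination game, deeper recursion lets each agent account for the other's shift toward the higher-payoff joint action within the same update step, so a `K=3` update moves the joint policy further toward coordination than the standard `K=1` simultaneous update.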