$K$-Level Policy Gradients for Multi-Agent Reinforcement Learning

📅 2025-09-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In multi-agent reinforcement learning (MARL), actor-critic algorithms suffer from miscoordination because each agent's policy update ignores the updates the other agents make at the same step, exacerbating non-stationarity. To address this, we propose the recursive $K$-level policy gradient: during its own update, each agent explicitly and recursively predicts the within-step policy responses of the other agents, mitigating non-stationarity and improving cooperative stability. The method is modular and integrates with mainstream MARL algorithms, including MAPPO, MADDPG, and FACMAC, without architectural modification. Empirical evaluation on StarCraft II and multi-agent MuJoCo benchmarks demonstrates improved convergence speed and final performance, and theoretical analysis establishes convergence to local Nash equilibria, advancing both the stability and scalability of decentralized MARL.
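One natural way to write the recursion, assuming $\theta_i$ denotes agent $i$'s policy parameters, $J_i$ its objective, and $\alpha$ a step size (our notation, a sketch of the idea rather than the paper's exact formulation):

$$
\theta_i^{(0)} = \theta_i, \qquad
\theta_i^{(k)} = \theta_i + \alpha\, \nabla_{\theta_i} J_i\!\left(\theta_i,\ \theta_{-i}^{(k-1)}\right), \qquad k = 1, \dots, K,
$$

after which agent $i$ adopts $\theta_i^{(K)}$. At $K = 1$ this reduces to the standard simultaneous policy gradient; for $K > 1$ each agent differentiates against opponents that have already taken $k-1$ anticipated update steps within the same time step.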

📝 Abstract
Actor-critic algorithms for deep multi-agent reinforcement learning (MARL) typically employ a policy update that responds to the current strategies of other agents. While straightforward, this approach does not account for the updates of other agents at the same update step, resulting in miscoordination. In this paper, we introduce the $K$-Level Policy Gradient (KPG), a method that recursively updates each agent against the updated policies of other agents, speeding up the discovery of effective coordinated policies. We theoretically prove that KPG with finite iterates achieves monotonic convergence to a local Nash equilibrium under certain conditions. We provide principled implementations of KPG by applying it to the deep MARL algorithms MAPPO, MADDPG, and FACMAC. Empirically, we demonstrate superior performance over existing deep MARL algorithms in StarCraft II and multi-agent MuJoCo.
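As a concreteness check, here is a minimal sketch of that recursion on a toy two-agent differentiable game. This is our own illustration, not the paper's deep-RL implementation; the objective $J(x, y) = -(xy - 1)^2$ and `grad_J` are assumed for the example:

```python
import numpy as np

def grad_J(i, joint):
    """Gradient of the shared toy objective J(x, y) = -(x*y - 1)**2
    with respect to agent i's scalar parameter."""
    x, y = joint
    d = -2.0 * (x * y - 1.0)           # dJ/d(x*y)
    return d * (y if i == 0 else x)    # chain rule through the product

def kpg_step(theta, alpha=0.05, K=3):
    """One K-level policy gradient step for two agents.

    At level k, agent i steps from its *current* parameters while the
    opponent is evaluated at its level-(k-1) updated parameters;
    level 1 is the usual simultaneous policy gradient.
    """
    theta = np.asarray(theta, dtype=float)
    prev = theta.copy()                # level-0 "opponents" = current policies
    for _ in range(K):
        new = theta.copy()
        for i in range(len(theta)):
            joint = prev.copy()
            joint[i] = theta[i]        # agent i reasons from its own base params
            new[i] = theta[i] + alpha * grad_J(i, joint)
        prev = new
    return prev

theta = [0.5, 0.5]
for _ in range(300):
    theta = kpg_step(theta)
print(theta, theta[0] * theta[1])      # x*y should approach the optimum 1
```

Each agent re-derives its gradient step against progressively more "updated" opponents, which is the within-step anticipation the abstract describes.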
Problem

Research questions and friction points this paper is trying to address.

Miscoordination: each agent's update responds to the other agents' current policies, ignoring their simultaneous same-step updates
Non-stationarity: mutually unaware simultaneous updates make the effective environment shift under each agent
Slow discovery of coordinated policies and weak convergence guarantees under standard actor-critic updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recursive policy updates against updated opponents
Monotonic convergence to local Nash equilibrium
Principled implementations on top of MAPPO, MADDPG, and FACMAC (a minimal integration sketch follows below)
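To illustrate the claimed modularity, here is a hedged sketch of how a $K$-level wrapper might sit on top of existing per-agent update steps. The `update_fns` interface is our hypothetical stand-in for one MAPPO/MADDPG/FACMAC gradient step, not the authors' actual API:

```python
from typing import Callable, List, TypeVar

Policy = TypeVar("Policy")  # any policy representation, e.g. network weights
# update_fns[i](base_policy_i, opponent_policies) -> agent i's updated policy,
# e.g. one PPO or DDPG gradient step from the underlying algorithm.
UpdateFn = Callable[[Policy, List[Policy]], Policy]

def kpg_round(policies: List[Policy],
              update_fns: List[UpdateFn],
              K: int = 2) -> List[Policy]:
    """K-level wrapper: at each level, agent i re-updates its *base* policy
    against the other agents' policies from the previous level."""
    level = list(policies)             # level 0 = current policies
    for _ in range(K):
        level = [fn(policies[i], level[:i] + level[i + 1:])
                 for i, fn in enumerate(update_fns)]
    return level
```

Because the wrapper only re-invokes each agent's existing update step, no architectural change to the base algorithm is needed, which matches the integration claim in the abstract.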
Aryaman Reddi
Department of Computer Science, TU Darmstadt, Germany; Hessian Center for Artificial Intelligence (Hessian.ai), Germany
Gabriele Tiboni
PhD Student, TU Darmstadt & Politecnico di Torino
Reinforcement Learning · Robot Learning · Sim-to-Real · 3D Learning
Jan Peters
Department of Computer Science, TU Darmstadt, Germany; Hessian Center for Artificial Intelligence (Hessian.ai), Germany; German Research Center for AI (DFKI), Systems AI for Robot Learning, Germany; Center for Cognitive Science, TU Darmstadt, Germany
Carlo D'Eramo
Professor of Reinforcement Learning @ University of Würzburg | Group leader @ TU Darmstadt
Reinforcement Learning · Deep Learning · Multi-Task Learning · Transfer Learning · Multi-Agent