🤖 AI Summary
This work addresses key challenges in multi-agent reinforcement learning (efficient coordination, conflict avoidance, and constraint satisfaction) by proposing the Action Graph Policy (AGP). AGP enables decentralized decision-making by constructing an action dependency graph and a coordination context, allowing agents to reason over global action relationships. Theoretical analysis shows that AGP's joint policy representation is strictly more expressive than independent policies and can realize coordinated joint actions beyond the reach of existing centralized value-decomposition approaches. Empirically, on partially observable tasks with anti-coordination penalties, AGP achieves success rates of 80–95%, substantially outperforming state-of-the-art MARL methods, which reach only 10–25%. AGP also consistently outperforms these baselines across diverse multi-agent environments.
📝 Abstract
Coordinating actions is the most fundamental form of cooperation in multi-agent reinforcement learning (MARL). Successful decentralized decision-making often depends not only on good individual actions, but on selecting compatible actions across agents to synchronize behavior, avoid conflicts, and satisfy global constraints. In this paper, we propose Action Graph Policies (AGP), which model dependencies among agents' available action choices. AGP constructs what we call *coordination contexts*, which enable agents to condition their decisions on global action dependencies. Theoretically, we show that AGPs induce a strictly more expressive joint policy than fully independent policies and can realize coordinated joint actions that provably improve on greedy execution, even that of centralized value-decomposition methods. Empirically, we show that AGP achieves 80–95% success on canonical coordination tasks with partial observability and anti-coordination penalties, where other MARL methods reach only 10–25%. We further demonstrate that AGP consistently outperforms these baselines in diverse multi-agent environments.
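To make the mechanism concrete, here is a minimal sketch of the action-graph idea described above: build a graph over (agent, action) nodes, aggregate neighbor embeddings into a coordination context, and let each agent score its own actions conditioned on that context before acting independently. The graph construction rule (fully connected across agents), the single round of mean aggregation, and all names and shapes are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the Action Graph Policy (AGP) idea from the abstract.
# The graph rule, the one-round mean aggregation, and all tensor shapes
# here are illustrative assumptions, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_actions, dim = 3, 4, 8

# Action embeddings, one per (agent, action) node of the dependency graph.
# Random placeholders standing in for learned network outputs.
action_embed = rng.normal(size=(n_agents, n_actions, dim))

# Action dependency graph: adjacency over (agent, action) nodes.
# Assumed rule: edges connect actions of *different* agents only.
nodes = n_agents * n_actions
adj = np.ones((nodes, nodes))
for i in range(n_agents):
    s = slice(i * n_actions, (i + 1) * n_actions)
    adj[s, s] = 0.0  # no edges among one agent's own actions

flat = action_embed.reshape(nodes, dim)

# Coordination context: one round of mean aggregation over graph
# neighbors, so each action node summarizes the other agents' options.
context = (adj @ flat) / np.clip(adj.sum(1, keepdims=True), 1, None)
context = context.reshape(n_agents, n_actions, dim)

# Decentralized policy: each agent scores its own actions conditioned
# on a local observation feature plus the coordination context.
obs_feat = rng.normal(size=(n_agents, dim))
logits = np.einsum("ad,akd->ak", obs_feat, action_embed + context)
probs = np.exp(logits - logits.max(1, keepdims=True))
probs /= probs.sum(1, keepdims=True)

joint_action = probs.argmax(1)  # each agent selects independently
print("joint action:", joint_action)
```

Because every agent's scores are conditioned on the same context derived from the shared action graph, their independent argmax choices can remain mutually compatible, which is the gap this sketch tries to illustrate relative to fully independent policies.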