Global Convergence of Multi-Agent Policy Gradient in Markov Potential Games

📅 2021-06-03
🏛️ International Conference on Learning Representations
📈 Citations: 127
Influential citations: 32
🤖 AI Summary
Multi-agent coordination in Markov games has lacked a unified, potential-function-based modeling framework. Method: The paper introduces Markov Potential Games (MPGs), a definition that extends classical potential games to state-dependent, dynamic multi-agent settings and generalizes prior attempts at capturing stateful coordination. It adapts gradient-dominance arguments from single-agent MDPs to multi-agent learning, yielding an analysis framework that is theoretically rigorous yet algorithmically implementable. Contribution/Results: The paper proves fast convergence of independent policy gradient to Nash policies in MPGs; shows that MPGs admit deterministic Nash policies; and clarifies how MPGs differ from normal-form potential games (an MPG may contain zero-sum state-games, and a Markov game whose every state-game is a potential game need not be an MPG).
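For orientation, the defining condition can be written compactly. The following is a minimal LaTeX sketch under assumed notation (V_s^i for agent i's value at state s, pi_{-i} for the other agents' policies); it paraphrases the definition rather than quoting the paper:

```latex
% Sketch of the MPG condition (notation assumed): a state-dependent potential
% \Phi_s tracks every agent's value change under unilateral policy deviations.
V_s^{i}(\pi_i, \pi_{-i}) - V_s^{i}(\pi_i', \pi_{-i})
  \;=\; \Phi_s(\pi_i, \pi_{-i}) - \Phi_s(\pi_i', \pi_{-i}),
\qquad \forall\, i,\; s,\; \pi_i,\; \pi_i'.
```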
📝 Abstract
Potential games are arguably one of the most important and widely studied classes of normal-form games. They define the archetypal setting of multi-agent coordination, as all agent utilities are perfectly aligned with each other via a common potential function. Can this intuitive framework be transplanted to the setting of Markov games? What are the similarities and differences between multi-agent coordination with and without state dependence? We present a novel definition of Markov Potential Games (MPGs) that generalizes prior attempts at capturing complex stateful multi-agent coordination. Counter-intuitively, insights from normal-form potential games do not carry over: MPGs can include settings where state-games are zero-sum, and conversely, Markov games in which every state-game is a potential game are not necessarily MPGs. Nevertheless, MPGs showcase standard desirable properties such as the existence of deterministic Nash policies. In our main technical result, we prove fast convergence of independent policy gradient to Nash policies by adapting recent gradient-dominance arguments developed for single-agent MDPs to multi-agent learning settings.
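As a concrete illustration of the learning rule the abstract refers to, here is a minimal sketch of independent projected policy gradient on a toy identical-interest Markov game, which is an MPG whose potential equals the shared value. The environment, step size, and all variable names are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Hypothetical toy setup: 2 agents, 2 states, 2 actions each, shared rewards.
# Shared rewards make the game an MPG whose potential is the common value.
S, A, gamma = 2, 2, 0.9
rng = np.random.default_rng(0)
R = rng.uniform(size=(S, A, A))                 # shared reward r(s, a1, a2)
P = rng.dirichlet(np.ones(S), size=(S, A, A))   # transitions P(. | s, a1, a2)

def joint(pi1, pi2):
    # Per-state joint action distribution under independent policies.
    return np.einsum('sa,sb->sab', pi1, pi2)

def evaluate(pi1, pi2, rho):
    # Exact policy evaluation: V solves the Bellman equation V = r + gamma*P*V;
    # d is the (unnormalized) discounted state occupancy from start dist rho.
    pj = joint(pi1, pi2)
    r_pi = np.einsum('sab,sab->s', pj, R)
    P_pi = np.einsum('sab,sabt->st', pj, P)
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    d = np.linalg.solve(np.eye(S) - gamma * P_pi.T, rho)
    return V, d

def q_bar(pi_other, V, agent):
    # Agent i's Q-values with the other agent's actions marginalized out.
    Q = R + gamma * np.einsum('sabt,t->sab', P, V)
    return (np.einsum('sab,sb->sa', Q, pi_other) if agent == 0
            else np.einsum('sab,sa->sb', Q, pi_other))

def project_simplex(v):
    # Row-wise Euclidean projection onto the probability simplex.
    u = np.sort(v, axis=-1)[:, ::-1]
    css = np.cumsum(u, axis=-1) - 1.0
    idx = np.arange(1, v.shape[-1] + 1)
    k = (u - css / idx > 0).sum(axis=-1) - 1
    theta = css[np.arange(v.shape[0]), k] / (k + 1)
    return np.maximum(v - theta[:, None], 0.0)

rho = np.ones(S) / S
pi1 = pi2 = np.ones((S, A)) / A
eta = 0.01   # step size: an illustrative choice, not tuned
for _ in range(500):
    V, d = evaluate(pi1, pi2, rho)
    # Independent projected gradient: each agent ascends its OWN value, using
    # the exact policy gradient d(s) * Q_bar(s, a) under direct policy
    # parameterization; there is no communication between the agents.
    g1 = d[:, None] * q_bar(pi2, V, agent=0)
    g2 = d[:, None] * q_bar(pi1, V, agent=1)
    pi1, pi2 = project_simplex(pi1 + eta * g1), project_simplex(pi2 + eta * g2)

print("shared value at convergence:", rho @ evaluate(pi1, pi2, rho)[0])
```

Each agent ascends only its own value with no coordination; in an MPG these uncoordinated updates jointly ascend the potential, which is the structure the paper's convergence proof exploits.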
Problem

Research questions and friction points this paper is trying to address.

Defining Markov Potential Games for stateful multi-agent coordination
Analyzing differences between normal-form and Markov potential games
Proving convergence of independent policy gradient to Nash policies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Defined Markov Potential Games (MPGs), generalizing potential games to stateful multi-agent settings
Proved that MPGs admit deterministic Nash policies
Adapted gradient-dominance arguments from single-agent MDPs to prove multi-agent convergence (see the sketch below)
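The core analytical tool is a gradient-dominance (Polyak-style) inequality lifted from single-agent MDPs to the potential. A hedged sketch of its shape, with C standing in for a distribution-mismatch constant and all notation assumed rather than quoted:

```latex
% Gradient dominance for the potential \Phi (schematic; constant C assumed):
% the optimality gap is bounded by the best first-order improvement, so
% (approximate) stationarity of independent gradient play implies an
% (approximate) Nash policy profile.
\Phi(\pi^{*}) - \Phi(\pi) \;\le\; C \,\max_{\bar{\pi} \in \Pi}
  \big\langle \nabla_{\pi} \Phi(\pi),\; \bar{\pi} - \pi \big\rangle
```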