Independent and Decentralized Learning in Markov Potential Games

๐Ÿ“… 2022-05-29
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 18
โœจ Influential: 1
๐Ÿค– AI Summary
This paper addresses decentralized, asynchronous, communication-free, and model-free multi-agent reinforcement learning in infinite-horizon discounted Markov potential games. We propose a two-timescale asynchronous stochastic approximation framework that integrates local Q-function estimation with actor-criticโ€“inspired policy updates, enabling decoupled learning using only individual reward observations. For the first time, we rigorously apply two-timescale analysis to establish almost-sure convergence of the learning dynamics to the set of Nash equilibria in this setting. Experiments demonstrate rapid convergence and robustness across standard potential game benchmarks. Our key contributions are: (1) a fully decentralized algorithm requiring no global information or coordination mechanisms; and (2) the first rigorous theoretical guarantee for the convergence of asynchronous Q-learning in decentralized Markov potential games. The analysis explicitly handles asynchrony, partial observability, and unknown environment dynamics while preserving equilibrium stability.
๐Ÿ“ Abstract
We study a multi-agent reinforcement learning dynamic and analyze its convergence in infinite-horizon discounted Markov potential games. We focus on the independent and decentralized setting, where players do not know the game parameters and cannot communicate or coordinate. In each stage, players asynchronously update their estimates of the Q-function, which evaluates their total contingent payoff, based on the realized one-stage reward. Players then independently update their policies by incorporating an optimal one-stage deviation strategy based on the estimated Q-function. Inspired by the actor-critic algorithm in single-agent reinforcement learning, a key feature of our learning dynamics is that agents update their Q-function estimates at a faster timescale than their policies. Leveraging tools from two-timescale asynchronous stochastic approximation theory, we characterize the convergent set of the learning dynamics.
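The dynamics described above can be illustrated with a minimal toy sketch: two independent agents in a one-state Markov potential game (here a 2x2 coordination game, so the common payoff is itself the potential), each updating a local Q-estimate asynchronously at a fast timescale and its policy toward a one-stage best deviation at a slow timescale. This is not the paper's exact algorithm; the payoff matrix, constant step sizes (the theory uses decreasing schedules with the policy step size vanishing faster than the Q step size), and update rules are simplifying assumptions for the demo.

```python
import numpy as np

# Toy two-timescale independent learning in a one-state Markov potential game.
# Assumptions for illustration: 2x2 identical-interest coordination game,
# constant step sizes, softmax-free best-response policy target.

rng = np.random.default_rng(0)
GAMMA = 0.9                        # discount factor
R = np.array([[0.6, 0.0],          # common one-stage reward r(a1, a2):
              [0.0, 1.0]])         # coordinating on action 1 pays most

ALPHA = 0.10                       # fast timescale: Q-function estimates
BETA = 0.01                        # slow timescale: policies (BETA << ALPHA)

Q = [np.zeros(2), np.zeros(2)]     # each agent's local Q(a_i) estimate
pi = [np.full(2, 0.5), np.full(2, 0.5)]  # independent mixed policies

for _ in range(20000):
    # Each player samples from its own policy: no communication, no
    # knowledge of the opponent's policy or the game parameters.
    a = [rng.choice(2, p=pi[i]) for i in range(2)]
    r = R[a[0], a[1]]              # realized identical-interest reward
    for i in range(2):
        # Asynchronous Q update: only the action actually played is
        # updated, using the agent's own continuation value.
        v = pi[i] @ Q[i]
        Q[i][a[i]] += ALPHA * (r + GAMMA * v - Q[i][a[i]])
        # Slow policy update toward the one-stage best deviation.
        target = np.eye(2)[np.argmax(Q[i])]
        pi[i] += BETA * (target - pi[i])

print([p.round(3) for p in pi])
```

Run long enough, both policies concentrate on the same action, i.e. a pure Nash equilibrium of the coordination game, mirroring the convergence-to-equilibria result the paper proves in the general decentralized setting.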
Problem

Research questions and friction points this paper is trying to address.

Multi-Agent Learning
Discounted Markov Potential Games
Strategy Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized Multi-Agent Learning
Q-Function Self-Adjustment
Differential Learning Rates
C. Maheshwari
Department of Electrical Engineering and Computer Sciences, University of California, Berkeley
Manxi Wu
University of California, Berkeley
Game Theory ยท Multi-agent Learning ยท Mechanism Design ยท Societal Network
Druv Pai
PhD Student, UC Berkeley
deep learning theory ยท computer vision ยท NLP
S. Sastry
Department of Electrical Engineering and Computer Sciences, University of California, Berkeley