AI Summary
This paper addresses decentralized, asynchronous, communication-free, and model-free multi-agent reinforcement learning in infinite-horizon discounted Markov potential games. We propose a two-timescale asynchronous stochastic approximation framework that integrates local Q-function estimation with actor-critic-inspired policy updates, enabling decoupled learning using only individual reward observations. For the first time, we rigorously apply two-timescale analysis to establish almost-sure convergence of the learning dynamics to the set of Nash equilibria in this setting. Experiments demonstrate rapid convergence and robustness across standard potential game benchmarks. Our key contributions are: (1) a fully decentralized algorithm requiring no global information or coordination mechanisms; and (2) the first rigorous theoretical guarantee for the convergence of asynchronous Q-learning in decentralized Markov potential games. The analysis explicitly handles asynchrony, partial observability, and unknown environment dynamics while preserving equilibrium stability.
Abstract
We study a multi-agent reinforcement learning dynamics and analyze its convergence in infinite-horizon discounted Markov potential games. We focus on the independent and decentralized setting, where players do not know the game parameters and cannot communicate or coordinate. In each stage, players asynchronously update their estimates of the Q-function, which evaluates their total contingent payoff, based on the realized one-stage reward. Then, players independently update their policies by incorporating an optimal one-stage deviation strategy based on the estimated Q-function. Inspired by the actor-critic algorithm in single-agent reinforcement learning, a key feature of our learning dynamics is that agents update their Q-function estimates at a faster timescale than their policies. Leveraging tools from two-timescale asynchronous stochastic approximation theory, we characterize the convergent set of the learning dynamics.
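To make the two-timescale structure concrete, below is a minimal sketch of one agent's updates. The class name, step-size schedules, and the softmax smoothing of the one-stage best response are illustrative assumptions, not the paper's exact specification; the sketch only shows that the Q estimate is updated asynchronously with a larger (fast-timescale) step size, while the policy moves slowly toward a one-stage deviation computed from that estimate.

```python
import numpy as np

class IndependentAgent:
    """One agent's view of the two-timescale dynamics (illustrative sketch only)."""

    def __init__(self, n_states, n_actions, gamma=0.95, tau=0.05):
        self.Q = np.zeros((n_states, n_actions))               # local Q-function estimate
        self.policy = np.full((n_states, n_actions), 1.0 / n_actions)  # start from uniform play
        self.gamma = gamma                                      # discount factor
        self.tau = tau                                          # softmax temperature (assumed smoothing)
        self.visits = np.zeros((n_states, n_actions))           # per-(state, action) update counts

    def critic_step(self, s, a, r, s_next):
        """Fast timescale: asynchronous Q update from the realized one-stage reward."""
        self.visits[s, a] += 1
        alpha = 1.0 / self.visits[s, a] ** 0.6                  # fast-timescale step size (decays more slowly than beta)
        v_next = np.dot(self.policy[s_next], self.Q[s_next])    # continuation value under the current policy
        self.Q[s, a] += alpha * (r + self.gamma * v_next - self.Q[s, a])

    def actor_step(self, s, t):
        """Slow timescale: move the policy toward a (smoothed) optimal one-stage deviation."""
        beta = 1.0 / (t + 1)                                    # slow-timescale step size (decays faster than alpha)
        logits = self.Q[s] / self.tau
        br = np.exp(logits - logits.max())
        br /= br.sum()                                          # softmax stand-in for the one-stage best response
        self.policy[s] = (1 - beta) * self.policy[s] + beta * br
```

In a full simulation, each agent would call critic_step only for the state-action pair it actually visited (hence the asynchrony) and actor_step with the smaller step size, so the Q estimates effectively track the slowly varying policies.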