🤖 AI Summary
This work addresses decentralized learning in general-sum Markov games. It is, to the authors' knowledge, the first to achieve sublinear swap regret—and hence convergence of the empirical play to a correlated equilibrium—in a fully decentralized, communication-free setting where all agents run the same learning procedure. Methodologically, it introduces a weighted regret framework in which the weights are determined by the path length of the agents' policy sequence, integrating online policy optimization with decentralized learning. This sidesteps known statistical and computational intractability results for regret minimization against adversarial opponents in Markov games. The algorithm attains an $O(\sqrt{T})$ swap regret bound, the first such guarantee in this setting. Crucially, it requires no inter-agent communication, supporting practicality and scalability in large-scale multi-agent systems.
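To make the swap regret objective concrete, the following is a minimal sketch (not the paper's algorithm) of how swap regret is computed for a played sequence of mixed strategies: for each action, we ask how much total loss would have been saved had every probabilistic play of that action been rerouted to the best alternative action. The function name and list-based representation are illustrative.

```python
def swap_regret(policies, losses):
    """Swap regret of a play sequence (illustrative sketch).

    policies[t][a]: probability of playing action a at round t.
    losses[t][a]:   loss of action a at round t.
    """
    A = len(policies[0])
    # Expected loss actually incurred over all rounds.
    incurred = sum(p[a] * l[a] for p, l in zip(policies, losses) for a in range(A))
    # M[a][b]: total loss if every (probabilistic) play of a were swapped to b.
    M = [[sum(p[a] * l[b] for p, l in zip(policies, losses)) for b in range(A)]
         for a in range(A)]
    # Best per-action deviation: independently reroute each action to its
    # best alternative, then compare against the incurred loss.
    best_swapped = sum(min(row) for row in M)
    return incurred - best_swapped
```

Sublinear swap regret for all agents implies that the empirical distribution of joint play approaches the set of correlated equilibria, which is the convergence notion the summary refers to.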
📝 Abstract
An abundance of recent impossibility results establish that regret minimization in Markov games with adversarial opponents is both statistically and computationally intractable. Nevertheless, none of these results preclude the possibility of regret minimization under the assumption that all parties adopt the same learning procedure. In this work, we present the first (to our knowledge) algorithm for learning in general-sum Markov games that provides sublinear regret guarantees when executed by all agents. The bounds we obtain are for swap regret, and thus, along the way, imply convergence to a correlated equilibrium. Our algorithm is decentralized, computationally efficient, and does not require any communication between agents. Our key observation is that online learning via policy optimization in Markov games essentially reduces to a form of weighted regret minimization, with unknown weights determined by the path length of the agents' policy sequence. Consequently, controlling the path length leads to weighted regret objectives for which sufficiently adaptive algorithms provide sublinear regret guarantees.
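The "weighted regret minimization" the abstract alludes to can be illustrated with a weighted variant of the multiplicative-weights (Hedge) algorithm, where each round's loss is scaled by a per-round weight before the update. This is a generic sketch under simplifying assumptions: here the weights are given up front, whereas in the paper they are unknown in advance, being determined by the path length of the agents' policy sequence.

```python
import math

def weighted_hedge(losses, weights, eta):
    """Hedge with per-round importance weights (illustrative sketch).

    losses[t][a]: loss of action a at round t.
    weights[t]:   weight of round t in the weighted regret objective.
    eta:          learning rate.
    Returns the list of mixed strategies played at each round.
    """
    A = len(losses[0])
    log_w = [0.0] * A          # log-space action weights for stability
    plays = []
    for l, w in zip(losses, weights):
        m = max(log_w)
        exp_w = [math.exp(x - m) for x in log_w]
        Z = sum(exp_w)
        plays.append([x / Z for x in exp_w])
        # Weighted multiplicative-weights update: round t's loss is
        # scaled by its weight w_t before being applied.
        log_w = [log_w[a] - eta * w * l[a] for a in range(A)]
    return plays
```

The adaptivity the abstract requires is with respect to these weights: if the path length of the policy sequence is kept small, the induced weights are benign enough that such adaptive algorithms retain sublinear (weighted) regret.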