🤖 AI Summary
This work addresses the lack of theoretical understanding of the existence, structure, and learnability of Nash equilibria in General Utility Markov Games (GUMGs). By introducing an agent-wise gradient domination condition, it establishes an equivalence between Nash equilibria and fixed points of projected pseudo-gradient dynamics, which yields a concise existence proof via Brouwer's fixed-point theorem and also implies the existence of Markov perfect equilibria. Building on a policy gradient theorem for GUMGs, the paper designs a model-free policy gradient algorithm and, for potential GUMGs, proves iteration complexity guarantees for computing approximate Nash equilibria together with explicit sample complexity bounds in both the generative-model and on-policy settings. These results go beyond prior work confined to zero-sum settings, providing the first theoretical analysis of common-interest convex Markov games.
📝 Abstract
Convex Markov Games (cMGs) were recently introduced as a broad class of multi-agent learning problems that generalize Markov games to settings where strategic agents optimize general utilities beyond additive rewards. While cMGs expand the modeling frontier, their theoretical foundations, particularly the structure of Nash equilibria (NE) and guarantees for learning algorithms, are not yet well understood. In this work, we address these gaps for an extension of cMGs, which we term General Utility Markov Games (GUMGs), capturing new applications that require coupling between agents' occupancy measures. We prove that in GUMGs, Nash equilibria coincide with the fixed points of projected pseudo-gradient dynamics (i.e., first-order stationary points), enabled by a novel agent-wise gradient domination property. This insight also yields a simple proof of NE existence using Brouwer's fixed-point theorem. We further show the existence of Markov perfect equilibria. Building on this characterization, we establish a policy gradient theorem for GUMGs and design a model-free policy gradient algorithm. For potential GUMGs, we establish iteration complexity guarantees for computing approximate NE under exact gradients and provide sample complexity bounds in both the generative model and on-policy settings. Our results extend beyond prior work restricted to zero-sum cMGs, providing the first theoretical analysis of common-interest cMGs.
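As a toy illustration of the projected pseudo-gradient dynamics whose fixed points the abstract identifies with Nash equilibria, the sketch below runs simultaneous projected gradient ascent in a two-player common-interest matrix game, i.e. a degenerate one-state potential game. This is not the paper's algorithm or setting; the payoff matrix `A`, step size `eta`, and iteration count are illustrative choices, and the simplex projection is the standard Euclidean one.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (standard sort-based method)."""
    u = np.sort(v)[::-1]                # sort entries in descending order
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / ks > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

# Common-interest game: both agents maximize the shared potential phi(x, y) = x^T A y.
A = np.array([[3.0, 0.0],
              [0.0, 2.0]])             # illustrative coordination payoffs
x = np.array([0.5, 0.5])               # agent 1's mixed strategy
y = np.array([0.5, 0.5])               # agent 2's mixed strategy
eta = 0.2                              # illustrative step size

for _ in range(500):
    gx = A @ y                         # pseudo-gradient for agent 1
    gy = A.T @ x                       # pseudo-gradient for agent 2
    # Simultaneous projected pseudo-gradient update for both agents.
    x, y = project_simplex(x + eta * gx), project_simplex(y + eta * gy)

phi = x @ A @ y
# At a fixed point of the projected dynamics, neither agent's update moves its strategy.
fixed = (np.allclose(project_simplex(x + eta * (A @ y)), x)
         and np.allclose(project_simplex(y + eta * (A.T @ x)), y))
```

Here the dynamics settle on the pure profile where both agents play action 0 (potential value 3), which is a Nash equilibrium of this matrix game; the `fixed` flag confirms it is a fixed point of the projected update, mirroring in miniature the equivalence the paper establishes under gradient domination.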