Provably Convergent Actor-Critic in Risk-averse MARL

📅 2026-02-12
📈 Citations: 0
Influential: 0

📝 Abstract
Learning stationary policies in infinite-horizon general-sum Markov games (MGs) remains a fundamental open problem in Multi-Agent Reinforcement Learning (MARL). While stationary strategies are preferred for their practicality, computing stationary forms of classic game-theoretic equilibria is computationally intractable, in stark contrast to the comparative ease of solving single-agent RL or zero-sum games. To bridge this gap, we study Risk-averse Quantal Response Equilibria (RQE), a solution concept rooted in behavioral game theory that incorporates risk aversion and bounded rationality. We show that RQE satisfies strong regularity properties that make it uniquely amenable to learning in MGs. We propose a novel two-timescale Actor-Critic algorithm with a fast-timescale actor and a slow-timescale critic. Leveraging the regularity of RQE, we prove that this approach achieves global convergence with finite-sample guarantees. We empirically validate our algorithm in several environments, demonstrating superior convergence compared to risk-neutral baselines.
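
To make the two-timescale structure concrete, below is a minimal, hypothetical sketch of a risk-averse quantal response learner on a two-player general-sum matrix game, used here as a stand-in for a full Markov game. This is not the paper's algorithm: the entropic risk measure, the logit quantal response, the payoff matrices `U`, the temperature `tau`, the risk coefficient `lam`, and the step-size schedules are all illustrative assumptions. Following the abstract, the actor runs on the fast timescale and the critic on the slow one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-player general-sum matrix game (illustrative payoffs).
# U[0][a, b]: agent 0's payoff; U[1][a, b]: agent 1's payoff,
# where a is agent 0's action and b is agent 1's action.
U = [np.array([[1.0, -0.5], [0.0, 0.8]]),
     np.array([[0.6, 0.2], [-0.4, 1.0]])]

n_actions = 2
tau = 0.5   # bounded-rationality temperature of the quantal response
lam = 1.0   # coefficient of the entropic risk measure (risk aversion)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Actor: policy logits per agent. Critic: running estimate of
# E[exp(-lam * payoff) | own action] under the opponent's current policy.
logits = [np.zeros(n_actions) for _ in range(2)]
critic = [np.ones(n_actions) for _ in range(2)]

for t in range(1, 20001):
    alpha = 1.0 / t ** 0.6  # fast timescale: actor
    beta = 1.0 / t ** 0.9   # slow timescale: critic (beta = o(alpha))

    pis = [softmax(z) for z in logits]
    acts = [rng.choice(n_actions, p=pi) for pi in pis]

    for i in range(2):
        a, b = acts[i], acts[1 - i]
        payoff = U[0][a, b] if i == 0 else U[1][b, a]
        # Slow critic update: stochastic average for the played action.
        critic[i][a] += beta * (np.exp(-lam * payoff) - critic[i][a])
        # Entropic risk-adjusted values: rho(a) = -(1/lam) * log E[exp(-lam * payoff)].
        q_risk = -np.log(np.maximum(critic[i], 1e-12)) / lam
        # Fast actor update: move logits toward the quantal response q_risk / tau.
        logits[i] += alpha * (q_risk / tau - logits[i])

print("agent 0 policy:", softmax(logits[0]))
print("agent 1 policy:", softmax(logits[1]))
```

Because the critic's step size decays faster (beta = o(alpha)), the actor effectively tracks the logit quantal response to a quasi-static risk-adjusted value estimate, which is the timescale ordering the abstract describes.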
Problem

Research questions and friction points this paper is trying to address.

Multi-Agent Reinforcement Learning
Markov Games
Stationary Policies
General-sum Games
Equilibrium Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Risk-averse Quantal Response Equilibrium
Multi-Agent Reinforcement Learning
Actor-Critic Algorithm
Global Convergence
Two-timescale Learning