Finite-Sample Guarantees for Best-Response Learning Dynamics in Zero-Sum Matrix Games

📅 2024-07-29
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the finite-sample convergence of two-player best-response dynamics in zero-sum matrix games, under both a full-information setting (where the opponent's strategy and the payoff matrices are observable) and a minimal-information setting (where each player observes only its own realized payoffs, yielding fully decoupled learning). For the latter, the authors propose an exploration-free two-timescale algorithm that integrates smoothed best-response updates for strategy estimates with a local payoff estimator inspired by temporal-difference (TD) learning. Using two-timescale stochastic approximation theory, they establish finite-sample guarantees: without any explicit exploration, the dynamics converge to an ε-Nash equilibrium with polynomial sample complexity, stated as O(1/ε²).

📝 Abstract
We study best-response type learning dynamics for two-player zero-sum matrix games. We consider two settings that are distinguished by the type of information that each player has about the game and their opponent's strategy. The first setting is the full information case, in which each player knows their own and the opponent's payoff matrices and observes the opponent's mixed strategy. The second setting is the minimal information case, where players do not observe the opponent's strategy and are not aware of either of the payoff matrices (instead, they only observe their realized payoffs). For this setting, also known as the radically uncoupled case in the learning in games literature, we study two-timescale learning dynamics that combine smoothed best-response type updates for strategy estimates with a TD-learning update to estimate a local payoff function. For these dynamics, without additional exploration, we provide polynomial-time finite-sample guarantees for convergence to an $\epsilon$-Nash equilibrium.
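To make the full-information setting concrete, here is a minimal sketch of smoothed best-response dynamics in a zero-sum matrix game. This is an illustrative implementation, not the paper's exact update rule: the softmax temperature `tau` and step size `lr` are hypothetical choices.

```python
import numpy as np

def smoothed_best_response(payoffs, tau):
    """Softmax (logit) response: a smoothed version of arg-max over a
    vector of expected payoffs, with temperature tau."""
    z = (payoffs - payoffs.max()) / tau   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def full_info_dynamics(A, steps=5000, tau=0.5, lr=0.05):
    """Smoothed best-response dynamics in the full-information setting:
    the row player maximizes x^T A y, the column player minimizes it,
    and each player observes the other's current mixed strategy."""
    m, n = A.shape
    x = np.full(m, 1.0 / m)               # row player's mixed strategy
    y = np.full(n, 1.0 / n)               # column player's mixed strategy
    for _ in range(steps):
        br_x = smoothed_best_response(A @ y, tau)     # response to observed y
        br_y = smoothed_best_response(-A.T @ x, tau)  # minimizer's viewpoint
        x = x + lr * (br_x - x)           # move a small step toward the
        y = y + lr * (br_y - y)           # smoothed best responses
    return x, y

# Matching pennies: the unique Nash equilibrium plays both actions with
# probability 1/2, and the smoothed dynamics settle near it.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = full_info_dynamics(A)
```

With a moderate temperature the discrete-time map is a contraction near the equilibrium, so both strategies approach the uniform mixture; a smaller `tau` sharpens the response but can induce cycling.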
Problem

Research questions and friction points this paper is trying to address.

Analyze learning dynamics in zero-sum matrix games
Compare full vs minimal information settings
Provide convergence guarantees for ε-Nash equilibrium
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-timescale learning dynamics for zero-sum games
Smoothed best-response updates for strategy estimates
TD-learning updates for local payoff estimation
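The three ingredients above can be sketched together. The following is an illustrative rendering of the minimal-information ("radically uncoupled") dynamics, not the paper's exact algorithm: the step-size exponents, softmax temperature, and update form are hypothetical choices made for the sketch.

```python
import numpy as np

def softmax(q, tau):
    """Smoothed best response to the payoff estimate q."""
    z = (q - q.max()) / tau
    e = np.exp(z)
    return e / e.sum()

def uncoupled_dynamics(A, steps=20000, tau=0.5, seed=0):
    """Sketch of two-timescale radically uncoupled learning: each player
    samples an action from its own mixed strategy, observes only its own
    realized payoff, updates a per-action payoff estimate q on a fast
    timescale (a TD-style running average), and nudges its strategy
    toward a smoothed best response to q on a slow timescale."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x, y = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    qx, qy = np.zeros(m), np.zeros(n)
    for t in range(1, steps + 1):
        i = rng.choice(m, p=x)            # actions sampled independently;
        j = rng.choice(n, p=y)            # neither player sees the other's
        r = A[i, j]                       # row player's realized payoff
        beta = t ** -0.6                  # fast step size (payoff estimates)
        alpha = t ** -0.9                 # slow step size (strategies)
        qx[i] += beta * (r - qx[i])       # update only the played action
        qy[j] += beta * (-r - qy[j])      # column player receives -r
        x += alpha * (softmax(qx, tau) - x)
        y += alpha * (softmax(qy, tau) - y)
    return x, y

x, y = uncoupled_dynamics(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```

Because `alpha` decays faster than `beta`, the payoff estimates track a (nearly) fixed strategy pair while the strategies drift slowly, which is the separation of timescales that the stochastic approximation analysis exploits; note that no explicit exploration term is added, since the smoothed best response keeps every action's probability positive.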