Bayesian Learning in Episodic Zero-Sum Games

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses efficient learning in finite-horizon, turn-based zero-sum Markov games with unknown transition and reward functions. The proposed algorithm employs Bayesian posterior sampling: at the beginning of each episode, agents sample a model from the posterior distribution and compute a Nash equilibrium policy based on the sampled model. The study provides the first theoretical guarantees on expected regret for both unilateral and bilateral posterior sampling in zero-sum settings, introducing a novel regret notion measured relative to the equilibrium of the true underlying game. The analysis establishes an expected regret bound of $O(HS\sqrt{ABHK \log(SABHK)})$, where $H$ is the horizon, $S$ the number of states, $A$ and $B$ the action counts for the two players, and $K$ the number of episodes. Empirical evaluations in a grid-world predator-prey environment demonstrate sublinear regret and superior performance over fictitious play.
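The "regret measured relative to the equilibrium of the true underlying game" can plausibly be formalized as follows; the notation here is an assumption for illustration, not quoted from the paper ($V^{*}_{1}$ denotes the minimax value of the true game at the initial step, $s_{1}^{k}$ the initial state of episode $k$, and $r_{h}^{k}$ the reward collected at step $h$ of episode $k$):

```latex
\mathrm{Regret}(K) \;=\; \mathbb{E}\!\left[\,\sum_{k=1}^{K}\left(V^{*}_{1}(s_{1}^{k}) \;-\; \sum_{h=1}^{H} r_{h}^{k}\right)\right]
```

Under this definition, the stated bound $O(HS\sqrt{ABHK\log(SABHK)})$ grows as $\sqrt{K}$ in the number of episodes, i.e., sublinearly, so the per-episode regret vanishes as $K \to \infty$.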

📝 Abstract
We study Bayesian learning in episodic, finite-horizon zero-sum Markov games with unknown transition and reward models. We investigate a posterior sampling algorithm in which each player maintains a Bayesian posterior over the game model, independently samples a game model at the beginning of each episode, and computes an equilibrium policy for the sampled model. We analyze two settings: (i) both players use the posterior sampling algorithm, and (ii) only one player uses posterior sampling while the opponent follows an arbitrary learning algorithm. In each setting, we provide guarantees on the expected regret of the posterior sampling agent. Our notion of regret compares the expected total reward of the learning agent against the expected total reward under equilibrium policies of the true game. Our main theoretical result is an expected regret bound of order $O(HS\sqrt{ABHK\log(SABHK)})$ for the posterior sampling agent, where $K$ is the number of episodes, $H$ is the episode length, $S$ is the number of states, and $A, B$ are the action-space sizes of the two players. Experiments in a grid-world predator-prey domain illustrate the sublinear regret scaling and show that posterior sampling competes favorably with a fictitious-play baseline.
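The per-episode loop described in the abstract (sample a model from the posterior, then compute an equilibrium policy for the sampled model) can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes a turn-based game where the max-player controls a designated set of states and the min-player controls the rest (so the stage equilibrium reduces to a max or min over actions), a Dirichlet posterior over transitions, and known rewards. All function and variable names are hypothetical.

```python
import random

def sample_transition_model(counts, S):
    """Sample P(s'|s,a) from a Dirichlet(1 + visit counts) posterior.

    counts: dict mapping (state, action) -> dict of next-state visit counts.
    A Dirichlet sample is drawn as normalized independent Gamma draws.
    """
    model = {}
    for (s, a), c in counts.items():
        draws = [random.gammavariate(1 + c.get(s2, 0), 1.0) for s2 in range(S)]
        total = sum(draws)
        model[(s, a)] = [d / total for d in draws]
    return model

def solve_sampled_game(model, reward, S, A, H, max_player_states):
    """Backward induction on the sampled model.

    Turn-based zero-sum assumption: in states belonging to
    max_player_states the controller maximizes; elsewhere the
    opponent minimizes. Unvisited (s, a) pairs fall back to a
    uniform transition prior.
    """
    V = [[0.0] * S for _ in range(H + 1)]       # V[h][s], with V[H] = 0
    policy = [[0] * S for _ in range(H)]        # greedy action per (h, s)
    for h in range(H - 1, -1, -1):
        for s in range(S):
            q = []
            for a in range(A):
                p = model.get((s, a), [1.0 / S] * S)
                q.append(reward[s][a] +
                         sum(p[s2] * V[h + 1][s2] for s2 in range(S)))
            if s in max_player_states:
                policy[h][s] = max(range(A), key=lambda a: q[a])
            else:
                policy[h][s] = min(range(A), key=lambda a: q[a])
            V[h][s] = q[policy[h][s]]
    return policy, V
```

In a full learning loop, each episode would call `sample_transition_model`, solve the sampled game, play the resulting policy for `H` steps, and fold the observed transitions back into `counts`. For simultaneous-move (rather than turn-based) games, the per-state backup would instead solve a matrix-game equilibrium, e.g., via linear programming.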
Problem


Bayesian learning
zero-sum Markov games
regret
episodic reinforcement learning
unknown dynamics
Innovation


Bayesian posterior sampling
zero-sum Markov games
regret bound
episodic reinforcement learning
multi-agent learning
Chang-Wei Yueh
Department of Electrical and Computer Engineering, University of Southern California
Andy Zhao
Department of Electrical and Computer Engineering, University of Southern California
Ashutosh Nayyar
University of Southern California
Stochastic control, Multi-agent systems, Reinforcement Learning, Game theory
Rahul Jain
PhD Student, Elmore Family School of Electrical and Computer Engineering, Purdue University
Deep Learning, Computer Vision, Causality, Graph models, Human-Computer Interaction