🤖 AI Summary
This paper studies online multi-agent learning in zero-sum symmetric Markov games with unobservable rewards. Since conventional game-theoretic learning frameworks break down when reward information is entirely absent, we first introduce three formal notions of symmetry and formulate the setting as an online learning problem conditioned on the opponent's observed actions and the known transition dynamics. The proposed algorithm observes no instantaneous rewards, only state transitions and the history of opponent actions, yet achieves asymptotically optimal responses against fully informed opponents in polynomial time. We provide theoretical guarantees showing that the algorithm asymptotically matches the opponent's cumulative payoff in both matrix games and general Markov games. This significantly extends the class of games learnable under severe information asymmetry and establishes a new paradigm for multi-agent decision-making with incomplete information.
📝 Abstract
Optimization under uncertainty is a fundamental problem in learning and decision-making, particularly in multi-agent systems. Previously, Feldman, Kalai, and Tennenholtz [2010] demonstrated the ability to efficiently compete in repeated symmetric two-player matrix games without observing payoffs, as long as the opponent's actions are observed. In this paper, we introduce and formalize a new class of zero-sum symmetric Markov games, which extends the notion of symmetry from matrix games to the Markovian setting. We show that even without observing payoffs, a player who knows the transition dynamics and observes only the opponent's sequence of actions can still compete against an adversary who may have complete knowledge of the game. We formalize three distinct notions of symmetry in this setting and show that, under these conditions, the learning problem can be reduced to an instance of online learning, enabling the player to asymptotically match the return of the opponent despite lacking payoff observations. Our algorithms apply to both matrix and Markov games, and run in polynomial time with respect to the size of the game and the number of episodes. Our work broadens the class of games in which robust learning is possible under severe informational disadvantage and deepens the connection between online learning and adversarial game theory.
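The structural fact that makes payoff-free competition plausible in the matrix-game case is that a symmetric zero-sum game has an antisymmetric payoff matrix, so mirrored play yields payoff zero regardless of the entries. A minimal sketch of that property (rock-paper-scissors is an illustrative choice here, not an example taken from the paper, and the mirroring observation below is not the paper's algorithm):

```python
# Row player's payoff in rock-paper-scissors: A[i][j] is the payoff to
# the player choosing action i when the opponent chooses action j
# (0 = rock, 1 = paper, 2 = scissors).
A = [
    [0, -1, 1],
    [1, 0, -1],
    [-1, 1, 0],
]
n = len(A)

# Symmetry in the matrix-game sense: swapping the players' roles negates
# the payoff, i.e. the matrix is antisymmetric, A[i][j] == -A[j][i].
assert all(A[i][j] == -A[j][i] for i in range(n) for j in range(n))

# One immediate consequence: the diagonal is zero, so a player who
# mirrors the opponent's action in a round receives payoff 0 in that
# round, exactly matching the opponent, without observing any payoff.
assert all(A[i][i] == 0 for i in range(n))
print("antisymmetric with zero diagonal: True")
```

Mirroring alone does not achieve the paper's guarantee against an adaptive adversary; the point of the sketch is only that symmetry ties the two players' payoffs together tightly enough that action observations can substitute for payoff observations.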