🤖 AI Summary
This work investigates the sublinear-regret behavior of two non-no-regret algorithms, Fictitious Play (FP) and constant-step-size Online Gradient Descent (OGD), in symmetric zero-sum games. For a weighted, high-dimensional generalization of "rock-paper-scissors"-type games, it establishes for the first time that both FP (under arbitrary tie-breaking rules) and OGD with a sufficiently large constant step size achieve an $O(\sqrt{T})$ regret upper bound under symmetric initialization, bypassing the classical requirements of decaying step sizes or strong regularization. This confirms Karlin's conjecture for this class of $n$-dimensional symmetric zero-sum games and reveals, for the first time, a "fast and furious" sublinear-regret phenomenon for gradient descent in zero-sum games beyond $2 \times 2$. Technically, the analysis integrates symmetric game modeling, dual-space geometric reasoning, and iterative trajectory dynamics, thereby introducing a novel theoretical framework for analyzing non-no-regret algorithms.
📝 Abstract
This paper investigates the sublinear regret guarantees of two non-no-regret algorithms in zero-sum games: Fictitious Play and Online Gradient Descent with constant stepsizes. In general adversarial online learning settings, both algorithms may exhibit instability and linear regret due to the absence of regularization (Fictitious Play) or insufficient regularization (Gradient Descent). However, their ability to obtain tighter regret bounds in two-player zero-sum games is less understood. In this work, we obtain strong new regret guarantees for both algorithms on a class of symmetric zero-sum games that generalize the classic three-strategy Rock-Paper-Scissors to a weighted, n-dimensional regime. Under symmetric initializations of the players' strategies, we prove that Fictitious Play with any tiebreaking rule has $O(\sqrt{T})$ regret, establishing a new class of games for which Karlin's Fictitious Play conjecture holds. Moreover, by leveraging a connection between the geometry of the iterates of Fictitious Play and Gradient Descent in the dual space of payoff vectors, we prove that Gradient Descent, for almost all symmetric initializations, obtains a similar $O(\sqrt{T})$ regret bound when its stepsize is a sufficiently large constant. For Gradient Descent, this establishes the first "fast and furious" behavior (i.e., sublinear regret without time-vanishing stepsizes) for zero-sum games larger than $2 \times 2$.
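As a concrete illustration of the Fictitious Play setting (a sketch of ours, not code from the paper), the snippet below runs FP on plain three-strategy Rock-Paper-Scissors, the simplest instance of the weighted n-dimensional games considered here, from a symmetric initialization, and measures the row player's external regret; empirically it grows on the order of $\sqrt{T}$, consistent with the stated bound. All variable names are illustrative.

```python
import numpy as np

# Row player's payoff matrix for Rock-Paper-Scissors (skew-symmetric).
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

T = 10_000
counts1 = np.array([1., 0., 0.])  # symmetric initialization: both start on Rock
counts2 = np.array([1., 0., 0.])
cum_payoffs = np.zeros(3)         # cumulative payoff of each fixed row strategy
realized = 0.0                    # row player's realized cumulative payoff

for t in range(T):
    # Each player best-responds to the opponent's empirical strategy;
    # np.argmax/np.argmin break ties by lowest index.
    a1 = int(np.argmax(A @ (counts2 / counts2.sum())))  # row best response
    a2 = int(np.argmin((counts1 / counts1.sum()) @ A))  # column player minimizes row payoff
    realized += A[a1, a2]
    cum_payoffs += A[:, a2]
    counts1[a1] += 1.0
    counts2[a2] += 1.0

regret = cum_payoffs.max() - realized
print(f"regret after T={T} rounds: {regret:.1f} (regret / sqrt(T) = {regret / T**0.5:.2f})")
```

Because the initialization is symmetric, the payoff matrix is skew-symmetric, and both players break ties the same way, the two players choose identical actions every round, so the realized payoff stays at zero and the regret reduces to the best fixed strategy's cumulative payoff.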