🤖 AI Summary
This work addresses the slow last-iterate convergence of optimistic algorithms—such as Optimistic Multiplicative Weights Update (OMWU) and Optimistic Gradient Descent Ascent (OGDA)—to Nash equilibria in two-player zero-sum games. We establish, for the first time, a rigorous lower bound: any optimistic Follow-the-Regularized-Leader (FTRL) algorithm that does not rapidly forget past iterates necessarily exhibits a non-vanishing duality gap in its final iterate on certain $2 \times 2$ games. This reveals that rapid forgetting is a *necessary* condition—not merely an artifact of prior analytical limitations—for achieving fast last-iterate convergence. Our analysis integrates tools from game theory, online convex optimization, and duality-gap analysis. The result explains the intrinsically slow last-iterate convergence of OMWU and related methods, formally establishes the theoretical necessity of forgetting mechanisms in optimistic algorithm design, and thereby provides a foundational principle for developing new, efficient forgetting-aware optimistic algorithms.
📝 Abstract
Self-play via online learning is one of the premier ways to solve large-scale two-player zero-sum games, both in theory and practice. Particularly popular algorithms include optimistic multiplicative weights update (OMWU) and optimistic gradient-descent-ascent (OGDA). While both algorithms enjoy $O(1/T)$ ergodic convergence to Nash equilibrium in two-player zero-sum games, OMWU offers several advantages including logarithmic dependence on the size of the payoff matrix and $\widetilde{O}(1/T)$ convergence to coarse correlated equilibria even in general-sum games. However, in terms of last-iterate convergence in two-player zero-sum games, an increasingly popular topic in this area, OGDA guarantees that the duality gap shrinks at a rate of $O(1/\sqrt{T})$, while the best existing last-iterate convergence for OMWU depends on some game-dependent constant that could be arbitrarily large. This begs the question: is this potentially slow last-iterate convergence an inherent disadvantage of OMWU, or is the current analysis too loose? Somewhat surprisingly, we show that the former is true. More generally, we prove that a broad class of algorithms that do not forget the past quickly all suffer the same issue: for any arbitrarily small $\delta>0$, there exists a $2 \times 2$ matrix game such that the algorithm admits a constant duality gap even after $1/\delta$ rounds. This class of algorithms includes OMWU and other standard optimistic follow-the-regularized-leader algorithms.
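To make the objects in the abstract concrete, the following is a minimal sketch (not from the paper) of OMWU self-play on a $2 \times 2$ zero-sum game, measuring the last-iterate duality gap. The game (matching pennies), step size, horizon, and initialization are illustrative choices, not the paper's lower-bound construction; matching pennies has a unique interior equilibrium, where OMWU's last iterate is known to converge.

```python
import numpy as np

def duality_gap(A, x, y):
    """Duality gap of (x, y) in the zero-sum game where the row player
    maximizes x^T A y: best-response value gap for both players."""
    return (A @ y).max() - (A.T @ x).min()

def omwu_self_play(A, eta=0.1, T=20000, x0=None, y0=None):
    """Run Optimistic Multiplicative Weights Update (OMWU) for both players
    and return the duality gap of the final (last) iterate.

    OMWU uses the standard 'optimistic' gradient 2*g_t - g_{t-1}
    inside the multiplicative-weights exponent."""
    n, m = A.shape
    x = np.ones(n) / n if x0 is None else np.array(x0, dtype=float)
    y = np.ones(m) / m if y0 is None else np.array(y0, dtype=float)
    gx_prev, gy_prev = A @ y, A.T @ x   # previous-round gradients
    for _ in range(T):
        gx, gy = A @ y, A.T @ x
        # Row player maximizes; column player minimizes (negated exponent).
        x = x * np.exp(eta * (2 * gx - gx_prev))
        y = y * np.exp(-eta * (2 * gy - gy_prev))
        x /= x.sum()
        y /= y.sum()
        gx_prev, gy_prev = gx, gy
    return x, y, duality_gap(A, x, y)

# Matching pennies: unique Nash equilibrium at (1/2, 1/2) for both players.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x0, y0 = [0.9, 0.1], [0.9, 0.1]
gap0 = duality_gap(A, np.array(x0), np.array(y0))
x, y, gapT = omwu_self_play(A, x0=x0, y0=y0)
print(f"initial gap: {gap0:.3f}, last-iterate gap after 20000 rounds: {gapT:.6f}")
```

On games like the paper's $2 \times 2$ hard instances, the same loop would instead keep the last-iterate gap bounded away from zero for $1/\delta$ rounds; the sketch only illustrates the update rule and the duality-gap metric being discussed.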