Scale-Invariant Regret Matching and Online Learning with Optimal Convergence: Bridging Theory and Practice in Zero-Sum Games

📅 2025-10-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
In zero-sum games, a longstanding gap persists between the theoretically optimal convergence rate of $O(T^{-1})$ and the $\Omega(T^{-1/2})$ rate exhibited in practice by state-of-the-art algorithms such as PRM$^+$. This work proposes the first scale-invariant, hyperparameter-free variant of PRM$^+$: it introduces an update rule under which the norm of the regret vector is nondecreasing, integrates optimistic gradient estimation with adaptive learning rates, and adopts an RVU-style analysis framework. For the first time among PRM-family methods, it simultaneously achieves $O(T^{-1})$ average-iterate and $O(T^{-1/2})$ best-iterate convergence. The algorithm requires no tuning of step sizes or other hyperparameters and matches PRM$^+$'s performance on standard game benchmarks. By closing the gap between first-order optimization theory and practical game-solving performance, this work advances both theoretical understanding and empirical applicability in equilibrium computation.

📝 Abstract
A considerable chasm has been looming for decades between theory and practice in zero-sum game solving through first-order methods. Although a convergence rate of $T^{-1}$ has long been established since Nemirovski's mirror-prox algorithm and Nesterov's excessive gap technique in the early 2000s, the most effective paradigm in practice is *counterfactual regret minimization*, which is based on *regret matching* and its modern variants. In particular, the state of the art across most benchmarks is *predictive* regret matching$^+$ (PRM$^+$), in conjunction with non-uniform averaging. Yet, such algorithms can exhibit slower $\Omega(T^{-1/2})$ convergence even in self-play. In this paper, we close the gap between theory and practice. We propose a new scale-invariant and parameter-free variant of PRM$^+$, which we call IREG-PRM$^+$. We show that it achieves $T^{-1/2}$ best-iterate and $T^{-1}$ (i.e., optimal) average-iterate convergence guarantees, while also being on par with PRM$^+$ on benchmark games. From a technical standpoint, we draw an analogy between IREG-PRM$^+$ and optimistic gradient descent with *adaptive* learning rate. The basic flaw of PRM$^+$ is that the ($\ell_2$-)norm of the regret vector -- which can be thought of as the inverse of the learning rate -- can decrease. By contrast, we design IREG-PRM$^+$ so as to maintain the invariance that the norm of the regret vector is nondecreasing. This enables us to derive an RVU-type bound for IREG-PRM$^+$, the first such property that does not rely on introducing additional hyperparameters to enforce smoothness. Furthermore, we find that IREG-PRM$^+$ performs on par with an adaptive version of optimistic gradient descent that we introduce whose learning rate depends on the misprediction error, demystifying the effectiveness of the regret matching family *vis-à-vis* more standard optimization techniques.
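For context, the abstract builds on predictive regret matching$^+$ (PRM$^+$) with non-uniform averaging. Below is a minimal self-play sketch of the *standard* PRM$^+$ update on a two-player zero-sum matrix game, not of the paper's IREG-PRM$^+$ (whose norm-nondecreasing modification is only described at a high level above). The function names, the linear averaging weights, and the example payoff matrix in the usage note are illustrative choices, not taken from the paper.

```python
import numpy as np

def normalize(v):
    """Map a nonnegative vector to the simplex; fall back to uniform at zero."""
    s = v.sum()
    return v / s if s > 0 else np.full_like(v, 1.0 / len(v))

def prm_plus(payoff, T=1000):
    """Self-play of predictive regret matching+ (PRM+) on a zero-sum matrix
    game where the row player maximizes x^T A y. Returns the linearly
    (non-uniformly) averaged strategies."""
    m, n = payoff.shape
    Rx, Ry = np.zeros(m), np.zeros(n)      # accumulated (truncated) regrets
    mx, my = np.zeros(m), np.zeros(n)      # predictions = last instantaneous regret
    avg_x, avg_y = np.zeros(m), np.zeros(n)
    for t in range(1, T + 1):
        # Play proportionally to the positive part of (regret + prediction).
        x = normalize(np.maximum(Rx + mx, 0.0))
        y = normalize(np.maximum(Ry + my, 0.0))
        ux = payoff @ y                     # row player's action utilities
        uy = -(payoff.T @ x)                # column player's (zero-sum)
        rx = ux - ux @ x                    # instantaneous regret vectors
        ry = uy - uy @ y
        Rx = np.maximum(Rx + rx, 0.0)       # RM+ truncation at zero
        Ry = np.maximum(Ry + ry, 0.0)
        mx, my = rx, ry                     # predict next regret = current one
        avg_x += t * x                      # linear averaging weights
        avg_y += t * y
    return avg_x / avg_x.sum(), avg_y / avg_y.sum()
```

The abstract's observation is that the $\ell_2$-norm of `Rx` (the inverse of an implicit learning rate) can shrink under this update; IREG-PRM$^+$ is designed so that it cannot.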
Problem

Research questions and friction points this paper is trying to address.

Bridging theory-practice gap in zero-sum game solving algorithms
Improving slow convergence rates of regret minimization methods
Developing parameter-free algorithms with optimal convergence guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scale-invariant regret matching variant IREG-PRM+
Maintains nondecreasing regret vector norm
Achieves optimal T^{-1} average-iterate convergence
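The abstract also reports that IREG-PRM$^+$ performs on par with an adaptive optimistic gradient method whose learning rate depends on the misprediction error. The sketch below is a hypothetical stand-in: an optimistic gradient step over the simplex with an AdaGrad-style stepsize keyed to the accumulated gradient change $\|g_t - g_{t-1}\|^2$. The paper's exact stepsize schedule is not given in this summary, and `adaptive_ogd` and its constants are illustrative.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def adaptive_ogd(payoff, T=2000):
    """Optimistic gradient ascent/descent in a zero-sum matrix game with a
    stepsize shrinking in the accumulated misprediction ||g_t - g_{t-1}||^2.
    Hypothetical schedule, not the paper's exact one."""
    m, n = payoff.shape
    x, y = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    gx_prev, gy_prev = np.zeros(m), np.zeros(n)
    err = 1.0                               # start positive to avoid 1/0
    avg_x, avg_y = np.zeros(m), np.zeros(n)
    for _ in range(T):
        gx = payoff @ y                     # row gradient (ascent direction)
        gy = payoff.T @ x                   # column gradient (descent)
        err += np.sum((gx - gx_prev) ** 2) + np.sum((gy - gy_prev) ** 2)
        eta = 1.0 / np.sqrt(err)            # adaptive, misprediction-driven
        x = project_simplex(x + eta * (2 * gx - gx_prev))  # optimistic step
        y = project_simplex(y - eta * (2 * gy - gy_prev))
        gx_prev, gy_prev = gx, gy
        avg_x += x
        avg_y += y
    return avg_x / T, avg_y / T
```

The optimistic step uses the standard one-call form $x_{t+1} = \Pi(x_t + \eta_t(2g_t - g_{t-1}))$, so the stepsize only shrinks while consecutive gradients keep mispredicting each other.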