🤖 AI Summary
This paper addresses the efficient computation of Nash equilibria in two-player zero-sum games by proposing the first **direct second-order optimization method** for this problem. The algorithm is built on a Douglas–Rachford-type splitting formulation, which is solved with a semismooth Newton (SSN) method. Crucially, it introduces a novel hybrid adaptive switching strategy that integrates Predictive Regret Matching⁺ (PRM⁺) with SSN. Theoretically, the method is proven to achieve both **global convergence** and **local superlinear convergence**, overcoming a fundamental limitation of existing first-order methods such as PRM⁺, which only guarantee sublinear rates. Empirical evaluation on matrix games demonstrates that, for high-accuracy solutions, the proposed method accelerates convergence by an order of magnitude over PRM⁺. The authors report this as the first method for zero-sum games with provably superlinear convergence that also delivers substantial computational gains in practice.
📝 Abstract
We introduce, to our knowledge, the first direct second-order method for computing Nash equilibria in two-player zero-sum games. To do so, we construct a Douglas–Rachford-style splitting formulation, which we then solve with a semismooth Newton (SSN) method. We show that our algorithm enjoys local superlinear convergence. To augment the fast local behavior of our SSN method with global efficiency guarantees, we develop a hybrid method that combines our SSN method with the state-of-the-art first-order method for game solving, Predictive Regret Matching$^+$ (PRM$^+$). Our hybrid algorithm leverages the global progress provided by PRM$^+$, while achieving a local superlinear convergence rate once it switches to SSN near a Nash equilibrium. Numerical experiments on matrix games demonstrate order-of-magnitude speedups over PRM$^+$ for high-precision solutions.
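To make the hybrid idea concrete, here is a minimal sketch of the first-order phase and switch test. It is *not* the paper's algorithm: it runs plain Regret Matching⁺ (dropping PRM⁺'s prediction step) on a zero-sum matrix game $\min_x \max_y x^\top A y$, and uses the duality gap of the averaged strategies as an illustrative criterion for where a hybrid scheme would hand off to a second-order (SSN) refinement. The function name, the gap-based switch rule, and the tolerance are assumptions for illustration only.

```python
import numpy as np

def rm_plus_until_switch(A, iters=50_000, gap_tol=1e-3):
    """Regret Matching+ on min_x max_y x^T A y (sketch, not the paper's PRM+).

    Stops once the duality gap of the average strategies drops below
    gap_tol -- the point where a hybrid method would switch to a
    semismooth-Newton refinement. Returns (x_bar, y_bar, gap, t).
    """
    m, n = A.shape
    Qx, Qy = np.zeros(m), np.zeros(n)          # cumulative positive regrets
    x, y = np.ones(m) / m, np.ones(n) / n      # current strategies
    x_sum, y_sum = np.zeros(m), np.zeros(n)    # running sums for averaging
    for t in range(1, iters + 1):
        x_sum += x
        y_sum += y
        x_bar, y_bar = x_sum / t, y_sum / t
        # Duality gap of the averages; it is 0 exactly at a Nash equilibrium.
        gap = (A.T @ x_bar).max() - (A @ y_bar).min()
        if gap < gap_tol:
            break                              # hypothetical hand-off to SSN
        Ay, ATx = A @ y, A.T @ x
        v = x @ Ay                             # current value estimate
        Qx = np.maximum(Qx + (v - Ay), 0.0)    # row player minimizes
        Qy = np.maximum(Qy + (ATx - v), 0.0)   # column player maximizes
        x = Qx / Qx.sum() if Qx.sum() > 0 else np.ones(m) / m
        y = Qy / Qy.sum() if Qy.sum() > 0 else np.ones(n) / n
    return x_bar, y_bar, gap, t
```

Because RM⁺'s averaged iterates close the gap only at a sublinear rate, the gap test fires many iterations in; this is precisely the regime where the paper's switch to SSN, with its local superlinear rate, pays off for high-precision solutions.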