🤖 AI Summary
In zero-sum games, existing algorithms often converge to non-Nash stationary points and lack theoretical convergence guarantees. This paper proposes a second-order optimization method, grounded in dynamical systems theory, that provably converges only to local Nash equilibria (LNEs) in smooth, possibly nonconvex-nonconcave zero-sum settings. The method achieves a local linear convergence rate, and a variant, interpretable as a modified Gauss-Newton scheme, converges superlinearly to a neighborhood of a point satisfying first-order LNE conditions. Its stability analysis ensures that, under the algorithm's dynamics, only LNEs are locally asymptotically stable, so non-Nash stationary points are not attractors. Furthermore, the approach naturally extends to generalized Nash equilibrium problems with convex, potentially coupled constraints while retaining these guarantees. Compared to state-of-the-art methods that lack convergence-rate guarantees, this work advances both theoretical rigor and practical convergence efficiency.
📝 Abstract
Zero-sum games arise in a wide variety of problems, including robust optimization and adversarial learning. However, algorithms deployed for finding a local Nash equilibrium in these games often converge to non-Nash stationary points. This highlights a key challenge: for any algorithm, the stability properties of its underlying dynamical system can cause non-Nash points to be potential attractors. To overcome this challenge, algorithms must account for subtleties involving the curvatures of players' costs. To this end, we leverage dynamical systems theory and develop a second-order algorithm for finding a local Nash equilibrium in the smooth, possibly nonconvex-nonconcave, zero-sum game setting. First, we prove that this novel method guarantees convergence only to local Nash equilibria, with a local linear convergence rate. We then interpret a version of this method as a modified Gauss-Newton algorithm with local superlinear convergence to the neighborhood of a point that satisfies first-order local Nash equilibrium conditions. In comparison, current related state-of-the-art methods do not offer convergence rate guarantees. Furthermore, we show that this approach naturally generalizes to settings with convex and potentially coupled constraints while retaining the earlier guarantee of convergence only to local (generalized) Nash equilibria.
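To make the second-order idea concrete, here is a minimal, hedged sketch of Newton iteration on the first-order local Nash conditions \(\omega(z) = (\nabla_x f, -\nabla_y f) = 0\) for a zero-sum game \(\min_x \max_y f(x, y)\). This is a generic illustration on an assumed toy cost, not the paper's method: plain Newton of this kind can also attract to non-Nash stationary points, which is exactly the failure mode the paper's modified scheme is designed to rule out.

```python
import numpy as np

def f(x, y):
    # Toy zero-sum cost (an assumption for illustration):
    # a saddle with local Nash equilibrium at (0, 0).
    return x * y + 0.1 * x**2 - 0.1 * y**2

def omega(z):
    # First-order Nash residual: gradient for the minimizer,
    # negated gradient for the maximizer.
    x, y = z
    return np.array([0.2 * x + y, -(x - 0.2 * y)])

def omega_jac(z):
    # Jacobian of omega (constant here since f is quadratic).
    return np.array([[0.2, 1.0], [-1.0, 0.2]])

def newton_on_nash_conditions(z0, tol=1e-10, max_iter=50):
    """Drive the first-order Nash residual omega(z) to zero."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        r = omega(z)
        if np.linalg.norm(r) < tol:
            break
        z = z - np.linalg.solve(omega_jac(z), r)
    return z

z_star = newton_on_nash_conditions([1.5, -2.0])
print(z_star)  # converges to the equilibrium at the origin
```

Because the toy cost is quadratic, the residual is linear and a single Newton step lands on the equilibrium; on genuinely nonconvex-nonconcave costs, the curvature-aware modifications described in the paper are what distinguish local Nash equilibria from other stationary points.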