Last-Iterate Convergence of Adaptive Riemannian Gradient Descent for Equilibrium Computation

📅 2023-06-29
📈 Citations: 5
Influential: 2
🤖 AI Summary
This work addresses the linear convergence of the last iterate in computing Nash equilibria of games defined on Riemannian manifolds, focusing on geodesically strongly monotone games and Riemannian gradient descent (RGD). Methodologically, it integrates tools from Riemannian optimization, geodesic monotonicity analysis, and game theory. The contributions are threefold: (i) it establishes the first geometry-agnostic linear convergence guarantee for the last iterate of RGD, i.e., one that requires no prior knowledge of the manifold curvature; (ii) it analyzes FARGD, an adaptive variant whose convergence rate matches that of its non-adaptive counterpart up to constant factors without requiring an estimate of the condition number; and (iii) it analyzes stochastic RGD (SRGD), showing it attains optimal sample complexity with respect to the gradient noise. Collectively, these advances enhance the robustness and practical applicability of equilibrium computation on Riemannian manifolds.
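The iterate at the center of the analysis is plain Riemannian gradient descent: each update moves along the manifold via the exponential map, x_{k+1} = Exp_{x_k}(-eta * F(x_k)), where F collects the players' Riemannian gradients. As a minimal illustration only (not the paper's code), the sketch below implements one such step on the unit sphere; the function names and the single-update setup are assumptions made for concreteness.

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: follow the geodesic from x along tangent vector v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def tangent_proj(x, g):
    """Project an ambient-space gradient g onto the tangent space at x."""
    return g - np.dot(g, x) * x

def rgd_step(x, ambient_grad, step_size):
    """One Riemannian gradient descent step: project the gradient, then retract via Exp."""
    riem_grad = tangent_proj(x, ambient_grad(x))
    return sphere_exp(x, -step_size * riem_grad)
```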
📝 Abstract
Equilibrium computation on Riemannian manifolds provides a unifying framework for numerous problems in machine learning and data analytics. One of the simplest yet most fundamental methods is Riemannian gradient descent (RGD). While its Euclidean counterpart has been extensively studied, it remains unclear how the manifold curvature affects RGD in game-theoretic settings. This paper addresses this gap by establishing new convergence results for geodesically strongly monotone games. Our key result shows that RGD attains last-iterate linear convergence in a geometry-agnostic fashion, a key property for applications in machine learning. We extend this guarantee to stochastic and adaptive variants -- SRGD and FARGD -- and establish that: (i) the sample complexity of SRGD is geometry-agnostic and optimal with respect to noise; (ii) FARGD matches the convergence rate of its non-adaptive counterpart up to constant factors, while avoiding reliance on the condition number. Overall, this paper presents the first geometry-agnostic last-iterate convergence analysis for games beyond the Euclidean setting, underscoring the surprising power of RGD -- despite its simplicity -- in solving a wide spectrum of machine learning problems.
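For reference, one standard way to state the geodesic strong monotonicity assumption mentioned in the abstract uses the inverse exponential map; the exact sign conventions and constants used in the paper may differ. The plain RGD recursion whose last iterate is analyzed is shown below it.

```latex
% Geodesic strong monotonicity of the game operator F on a manifold M
% (a common formulation; the paper's exact definition may differ in conventions):
% for all x, y in M and some modulus mu > 0,
\langle F(x), \operatorname{Exp}_x^{-1}(y) \rangle_x
  + \langle F(y), \operatorname{Exp}_y^{-1}(x) \rangle_y
  \le -\mu \, d(x, y)^2 .

% Last-iterate analysis concerns the plain RGD recursion
x_{k+1} = \operatorname{Exp}_{x_k}\!\left( -\eta \, F(x_k) \right).
```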
Problem

Research questions and friction points this paper is trying to address.

Analyzing Riemannian gradient descent convergence on curved manifolds
Establishing geometry-agnostic linear convergence for geodesically strongly monotone games
Extending convergence guarantees to stochastic and adaptive variants
Innovation

Methods, ideas, or system contributions that make the work stand out.

Riemannian gradient descent achieves geometry-agnostic linear convergence
Stochastic RGD (SRGD) attains geometry-agnostic sample complexity that is optimal with respect to gradient noise
Adaptive RGD (FARGD) avoids dependence on the problem's condition number (see the sketch after this list)
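The following sketch illustrates the general flavor of an adaptive-step-size Riemannian method: the step size is driven by accumulated gradient norms (an AdaGrad-norm-style rule), so no condition-number estimate is needed up front. This is an illustration under stated assumptions, not the paper's FARGD or SRGD; the function names and the specific adaptive rule are chosen for concreteness.

```python
import numpy as np

def adaptive_rgd(x0, stoch_grad, exp_map, proj, n_steps, eta0=1.0):
    """Illustrative adaptive-step RGD loop (AdaGrad-norm style), not the paper's FARGD.

    stoch_grad(x): possibly noisy ambient gradient estimate at x.
    exp_map(x, v), proj(x, g): the manifold's exponential map and tangent projection.
    """
    x, grad_sq_sum = x0, 0.0
    for _ in range(n_steps):
        g = proj(x, stoch_grad(x))
        grad_sq_sum += float(np.dot(g, g))
        # The step size shrinks with the accumulated gradient norms, so no
        # prior estimate of the condition number is required.
        eta = eta0 / np.sqrt(1.0 + grad_sq_sum)
        x = exp_map(x, -eta * g)
    return x
```

With the sphere maps from the earlier sketch, adaptive_rgd(x0, grad_fn, sphere_exp, tangent_proj, 1000) runs the loop on the unit sphere.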