An Improved Last-Iterate Convergence Rate for Anchored Gradient Descent Ascent

📅 2026-04-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work resolves an open question: does the Anchored Gradient Descent Ascent (AGDA) algorithm achieve the tight $O(1/t)$ last-iterate convergence rate for smooth convex-concave minimax optimization? It establishes for the first time that AGDA attains the tight $O(1/t)$ rate in terms of the squared gradient norm at the last iterate. The proof was assisted by an AI system and formally verified with the Lean theorem prover, with all code and formal proofs made publicly available. Beyond confirming the optimal last-iterate convergence bound for AGDA, the result introduces a novel paradigm for the theoretical analysis of minimax optimization algorithms.
📝 Abstract
We analyze the last-iterate convergence of the Anchored Gradient Descent Ascent algorithm for smooth convex-concave min-max problems. While previous work established a last-iterate rate of $\mathcal{O}(1/t^{2-2p})$ for the squared gradient norm, where $p \in (1/2, 1)$, it remained an open problem whether the improved exact $\mathcal{O}(1/t)$ rate is achievable. In this work, we resolve this question in the affirmative. This result was discovered autonomously by an AI system capable of writing formal proofs in Lean. The Lean proof can be accessed at https://github.com/google-deepmind/formal-conjectures/pull/3675/commits/a13226b49fd3b897f4c409194f3bcbeb96a08515
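The idea behind anchoring is that each iterate is pulled back toward the starting point with a weight that decays over time. A minimal numerical sketch of this scheme is below; note the anchor weight $1/(k+2)$ and the step schedule $\eta_0/(k+1)^p$ are illustrative assumptions chosen for the sketch, not the paper's exact parameters.

```python
import numpy as np

def agda(F, z0, steps, eta0=0.1, p=0.7):
    """Schematic anchored gradient descent ascent:

        z_{k+1} = z_k + beta_k * (z0 - z_k) - eta_k * F(z_k)

    beta_k = 1/(k+2) is the decaying anchor weight; the step schedule
    eta_k = eta0/(k+1)**p is an illustrative assumption, not the
    paper's exact choice.
    """
    z = np.asarray(z0, dtype=float)
    anchor = z.copy()  # the anchor point is the initial iterate
    for k in range(steps):
        beta_k = 1.0 / (k + 2)       # anchor weight, decays like 1/k
        eta_k = eta0 / (k + 1) ** p  # illustrative decaying step size
        z = z + beta_k * (anchor - z) - eta_k * F(z)
    return z

# Saddle operator of the toy bilinear problem f(x, y) = x * y:
# F(x, y) = (df/dx, -df/dy) = (y, -x); the saddle point is (0, 0).
F = lambda z: np.array([z[1], -z[0]])
z_last = agda(F, z0=[1.0, 1.0], steps=500)
# Squared gradient norm at the last iterate, the quantity the
# paper's O(1/t) rate is stated for.
grad_norm_sq = float(np.dot(F(z_last), F(z_last)))
```

Without the anchor term, plain gradient descent ascent with these step sizes would spiral around the saddle point of the bilinear problem; the anchor is what damps the rotation.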
Problem

Research questions and friction points this paper is trying to address.

last-iterate convergence
min-max optimization
convex-concave
gradient descent ascent
convergence rate
Innovation

Methods, ideas, or system contributions that make the work stand out.

last-iterate convergence
anchored gradient descent ascent
min-max optimization
formal proof
convergence rate