🤖 AI Summary
This work resolves an open question: does the Anchored Gradient Descent Ascent (AGDA) algorithm achieve the tight $O(1/t)$ last-iterate convergence rate for smooth convex-concave minimax optimization? The analysis establishes for the first time that AGDA attains an $O(1/t)$ rate in terms of the squared gradient norm at the last iterate, and that this rate is tight. The proof was assisted by an AI system and formally verified using the Lean theorem prover, with all code and formal proofs made publicly available. Beyond confirming the optimal last-iterate bound for AGDA, the result introduces a novel paradigm for the theoretical analysis of minimax optimization algorithms.
📝 Abstract
We analyze the last-iterate convergence of the Anchored Gradient Descent Ascent (AGDA) algorithm for smooth convex-concave min-max problems. While previous work established a last-iterate rate of $\mathcal{O}(1/t^{2-2p})$ for the squared gradient norm, where $p \in (1/2, 1)$, it remained an open problem whether the tight $\mathcal{O}(1/t)$ rate is achievable. In this work, we resolve this question in the affirmative. The result was discovered autonomously by an AI system capable of writing formal proofs in Lean. The Lean proof is available at https://github.com/google-deepmind/formal-conjectures/pull/3675/commits/a13226b49fd3b897f4c409194f3bcbeb96a08515
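To make the setting concrete, below is a minimal numerical sketch of an anchored GDA iteration on the bilinear toy problem $f(x, y) = xy$, whose saddle point sits at the origin. The update form $z_{k+1} = z_k + \beta_k (z_0 - z_k) - \eta F(z_k)$, the anchoring schedule $\beta_k = 1/(k+1)^p$, and the specific values of $\eta$ and $p$ are illustrative assumptions for this sketch, not the exact parameters analyzed in the paper.

```python
import numpy as np

def anchored_gda_bilinear(z0, eta=0.1, p=0.7, iters=1000):
    """Sketch of anchored GDA on f(x, y) = x * y (min over x, max over y).

    The saddle-point operator is F(x, y) = (df/dx, -df/dy) = (y, -x).
    Each step takes a plain GDA step -eta * F(z) plus an anchoring pull
    beta_k * (z0 - z) back toward the initial point z0.
    """
    z = z0.copy()
    for k in range(iters):
        F = np.array([z[1], -z[0]])      # saddle operator at z
        beta = 1.0 / (k + 1) ** p        # assumed anchoring schedule
        z = z + beta * (z0 - z) - eta * F
    return z

z0 = np.array([1.0, 1.0])
z_T = anchored_gda_bilinear(z0)
grad_norm = np.linalg.norm([z_T[1], -z_T[0]])  # gradient norm at last iterate
print(grad_norm)
```

Plain GDA spirals away from the saddle point on this bilinear problem, so the shrinking gradient norm at the last iterate is due entirely to the anchoring term.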