🤖 AI Summary
This work addresses the Hypergradient Descent Method (HDM), a 25-year-old adaptive step-size heuristic lacking rigorous convergence guarantees, by establishing, for the first time, a provable convergence theory for HDM within the online learning framework. The analysis uncovers the mechanism underlying its local superlinear convergence and identifies the root causes of its instability. Building on these insights, we propose two stabilized variants: HDM-HB, which integrates heavy-ball momentum, and HDM-Nesterov, incorporating Nesterov acceleration. Both methods achieve robust and efficient performance on deterministic convex optimization problems, matching L-BFGS in empirical convergence speed while requiring significantly less memory and cheaper per-iteration computation. They substantially outperform existing adaptive first-order methods in both stability and efficiency, offering a theoretically grounded, practical alternative to quasi-Newton approaches.
📝 Abstract
This paper investigates the convergence properties of the hypergradient descent method (HDM), a 25-year-old heuristic originally proposed for adaptive stepsize selection in stochastic first-order methods. We provide the first rigorous convergence analysis of HDM using the online learning framework of [Gao24] and apply this analysis to develop new state-of-the-art adaptive gradient methods with empirical and theoretical support. Notably, HDM automatically identifies the optimal stepsize for the local optimization landscape and achieves local superlinear convergence. Our analysis explains the instability of HDM reported in the literature and proposes efficient strategies to address it. We also develop two HDM variants with heavy-ball and Nesterov momentum. Experiments on deterministic convex problems show HDM with heavy-ball momentum (HDM-HB) exhibits robust performance and significantly outperforms other adaptive first-order methods. Moreover, HDM-HB often matches the performance of L-BFGS, an efficient and practical quasi-Newton method, using less memory and cheaper iterations.
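For context, the classic hypergradient heuristic that the paper analyzes treats the stepsize itself as a tunable parameter and nudges it with the inner product of consecutive gradients: the stepsize grows when successive gradients align and shrinks when they disagree. The sketch below is a minimal illustration of that basic recipe on a toy quadratic; it is not the paper's stabilized HDM-HB or HDM-Nesterov variants, and the function names and constants are illustrative choices, not taken from the paper.

```python
import numpy as np

def hypergradient_descent(grad_f, x0, alpha=1e-3, beta=1e-6, iters=500):
    """Gradient descent whose stepsize alpha is itself adapted online:
    alpha increases when consecutive gradients point in similar directions
    and decreases when they disagree (the classic hypergradient update)."""
    x = np.asarray(x0, dtype=float)
    g_prev = grad_f(x)
    for _ in range(iters):
        x = x - alpha * g_prev        # ordinary gradient step
        g = grad_f(x)
        alpha += beta * (g @ g_prev)  # hypergradient update of the stepsize
        g_prev = g
    return x, alpha

# Toy problem: f(x) = 0.5 * ||x||^2, so grad f(x) = x.
x_final, alpha_final = hypergradient_descent(lambda x: x, x0=[5.0, 5.0])
```

On this well-conditioned toy problem the stepsize grows from its conservative initial value as the iterates converge; on ill-conditioned or noisy problems the same update can overshoot and oscillate, which is the instability the paper's analysis explains and its stabilized variants address.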