🤖 AI Summary
Conventional adaptive optimization algorithms (e.g., AdaGrad, Adam) require manual learning rate tuning—a labor-intensive process that hampers training efficiency and reproducibility.
Method: We propose AdaGrad++ and Adam++, two novel adaptive algorithms that eliminate the learning rate hyperparameter while retaining structural simplicity and rigorous convergence guarantees. Their core innovation lies in gradient-norm-based adaptive accumulation and dynamic step-size normalization—enabling automatic scaling without preset learning rate values.
Contribution/Results: For the first time, these methods recover the optimal convergence rates of their canonical counterparts under both convex and non-convex settings, without any learning rate specification. Theoretical analysis is grounded in online learning and stochastic gradient descent frameworks, bridging a critical gap in parameter-free optimization by simultaneously ensuring simplicity, hyperparameter-freedom, and provable convergence. Empirical evaluation across diverse deep learning tasks demonstrates performance on par with carefully tuned baselines, while completely obviating learning rate search.
📝 Abstract
Optimization algorithms such as AdaGrad and Adam have significantly advanced the training of deep models by dynamically adjusting the learning rate during the optimization process. However, adhoc tuning of learning rates poses a challenge, leading to inefficiencies in practice. To address this issue, recent research has focused on developing"learning-rate-free"or"parameter-free"algorithms that operate effectively without the need for learning rate tuning. Despite these efforts, existing parameter-free variants of AdaGrad and Adam tend to be overly complex and/or lack formal convergence guarantees. In this paper, we present AdaGrad++ and Adam++, novel and simple parameter-free variants of AdaGrad and Adam with convergence guarantees. We prove that AdaGrad++ achieves comparable convergence rates to AdaGrad in convex optimization without predefined learning rate assumptions. Similarly, Adam++ matches the convergence rate of Adam without relying on any conditions on the learning rates. Experimental results across various deep learning tasks validate the competitive performance of AdaGrad++ and Adam++.