Towards Simple and Provable Parameter-Free Adaptive Gradient Methods

📅 2024-12-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional adaptive optimization algorithms (e.g., AdaGrad, Adam) require manual learning rate tuning, a labor-intensive process that hampers training efficiency and reproducibility. Method: We propose AdaGrad++ and Adam++, two novel adaptive algorithms that eliminate the learning rate hyperparameter while retaining structural simplicity and rigorous convergence guarantees. Their core innovation lies in gradient-norm-based adaptive accumulation and dynamic step-size normalization, enabling automatic scaling without a preset learning rate. Contribution/Results: These methods match the convergence rates of their canonical counterparts (AdaGrad++ matches AdaGrad in convex optimization; Adam++ matches Adam without any conditions on the learning rate), with no learning rate specification required. The theoretical analysis is grounded in online learning and stochastic optimization frameworks, closing a gap in parameter-free optimization by simultaneously ensuring simplicity, hyperparameter freedom, and provable convergence. Empirical evaluation across diverse deep learning tasks demonstrates performance on par with carefully tuned baselines while completely obviating learning rate search.
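The accumulation-plus-normalization idea can be illustrated with a small sketch. Note the caveats: the distance-based scale below is a hypothetical stand-in for the paper's normalization (in the general spirit of parameter-free methods), not the actual AdaGrad++ update rule, and the function and variable names are invented for illustration.

```python
import math

def pf_adagrad_step(x, x0, grad, accum, scale, eps=1e-8):
    """One parameter-free AdaGrad-style step (illustrative sketch only).

    Instead of a hand-tuned learning rate, the step is scaled by the largest
    distance travelled from the initial point so far. This is an assumed
    stand-in for the paper's dynamic step-size normalization, not its exact rule.
    """
    # AdaGrad-style coordinate-wise accumulation of squared gradients.
    accum = [a + g * g for a, g in zip(accum, grad)]
    # The scale grows automatically as the iterates move away from x0,
    # replacing a preset learning rate.
    dist = math.sqrt(sum((xi - x0i) ** 2 for xi, x0i in zip(x, x0)))
    scale = max(scale, dist)
    x = [xi - scale * g / (math.sqrt(a) + eps)
         for xi, g, a in zip(x, grad, accum)]
    return x, accum, scale

# Demo: minimize f(x) = 0.5 * ||x||^2, whose gradient at x is x itself.
x0 = [1.0, 1.0]
x, accum, scale = list(x0), [0.0, 0.0], 1e-3  # tiny initial scale, no tuned lr
for _ in range(200):
    grad = list(x)  # gradient of the quadratic
    x, accum, scale = pf_adagrad_step(x, x0, grad, accum, scale)
```

After 200 steps the iterate is driven close to the minimizer at the origin even though no learning rate was ever specified: the scale bootstraps itself from the trajectory while the accumulator damps the steps, which is the qualitative behavior the summary describes.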

📝 Abstract
Optimization algorithms such as AdaGrad and Adam have significantly advanced the training of deep models by dynamically adjusting the learning rate during the optimization process. However, ad hoc tuning of learning rates poses a challenge, leading to inefficiencies in practice. To address this issue, recent research has focused on developing "learning-rate-free" or "parameter-free" algorithms that operate effectively without the need for learning rate tuning. Despite these efforts, existing parameter-free variants of AdaGrad and Adam tend to be overly complex and/or lack formal convergence guarantees. In this paper, we present AdaGrad++ and Adam++, novel and simple parameter-free variants of AdaGrad and Adam with convergence guarantees. We prove that AdaGrad++ achieves comparable convergence rates to AdaGrad in convex optimization without predefined learning rate assumptions. Similarly, Adam++ matches the convergence rate of Adam without relying on any conditions on the learning rates. Experimental results across various deep learning tasks validate the competitive performance of AdaGrad++ and Adam++.
Problem

Research questions and friction points this paper is trying to address.

Adaptive Learning Methods
Manual Learning Rate Adjustment
Training Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

AdaGrad++
Adam++
Automatic Learning Rate Adjustment
Yuanzhe Tao
School of Mathematical Sciences, Peking University, Beijing, China
Huizhuo Yuan
Bytedance Seed
Xun Zhou
Professor of Computer Science, Harbin Institute of Technology, Shenzhen (HIT-SZ)
Big data analytics, Spatial database, Spatial Data Mining, GIS, machine learning
Yuan Cao
Department of Statistics and Actuarial Science, School of Computing and Data Science, the University of Hong Kong, Hong Kong
Quanquan Gu
Associate Professor of Computer Science, UCLA
AGI, Large Language Models, Reinforcement Learning, Nonconvex Optimization