AdaGrad-Diff: A New Version of the Adaptive Gradient Algorithm

📅 2026-02-13
📈 Citations: 0
Influential: 0
📄 PDF

📝 Abstract
Vanilla gradient methods are often highly sensitive to the choice of stepsize, which typically requires manual tuning. Adaptive methods alleviate this issue and have therefore become widely used. Among them, AdaGrad has been particularly influential. In this paper, we propose an AdaGrad-style adaptive method in which the adaptation is driven by the cumulative squared norms of successive gradient differences rather than gradient norms themselves. The key idea is that when gradients vary little across iterations, the stepsize is not unnecessarily reduced, while significant gradient fluctuations, reflecting curvature or instability, lead to automatic stepsize damping. Numerical experiments demonstrate that the proposed method is more robust than AdaGrad in several practically relevant settings.
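The abstract does not spell out the update rule, so the sketch below is a hypothetical, norm-based reconstruction of the stated idea: the AdaGrad denominator accumulates the squared norms of successive gradient differences ||g_t − g_{t−1}||² instead of the squared gradient norms ||g_t||². The stepsize eta, the damping constant eps, and the choice to seed the accumulator with ||g_0||² are illustration-only assumptions, not the paper's specification.

```python
import numpy as np

def adagrad_diff(grad, x0, eta=0.5, eps=1e-6, n_iters=200):
    """Hedged sketch of a norm-based AdaGrad-Diff step: the stepsize is
    damped by the cumulative squared norms of successive gradient
    differences, rather than of the gradients themselves (as in AdaGrad).
    Hyperparameters and initialization are assumptions for illustration."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    acc = np.dot(g, g)  # assumed seed: ||g_0||^2 (not given in the abstract)
    for _ in range(n_iters):
        x = x - eta / np.sqrt(eps + acc) * g
        g_new = grad(x)
        # If gradients barely change, acc grows slowly and the stepsize is
        # not unnecessarily reduced; large fluctuations shrink it automatically.
        acc += np.dot(g_new - g, g_new - g)  # accumulate ||g_{t+1} - g_t||^2
        g = g_new
    return x

# Toy usage: minimize f(x) = 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x_star = adagrad_diff(lambda x: A.T @ (A @ x - b), np.zeros(5))
```

A per-coordinate variant, in the spirit of diagonal AdaGrad, would instead accumulate (g_t − g_{t−1})² elementwise; which variant the paper adopts is not stated in the abstract.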
Problem

Research questions and friction points this paper is trying to address.

adaptive gradient
stepsize sensitivity
gradient variation
optimization robustness
AdaGrad
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive gradient
gradient difference
stepsize adaptation
AdaGrad
optimization robustness
👥 Authors
Matia Bojovic
Computational Statistics and Machine Learning, Istituto Italiano di Tecnologia, Genoa, Italy; Department of Mathematics, University of Genoa, Genoa, Italy
Saverio Salzo
Associate Professor, Sapienza University of Rome, Italy
Optimization, Machine Learning
Massimiliano Pontil
Computational Statistics and Machine Learning, Istituto Italiano di Tecnologia, Genoa, Italy; Department of Computer Science, University College London, London, UK