🤖 AI Summary
To address AdaGrad’s limited computational efficiency, lack of scale invariance, and insufficient generalization robustness, this paper proposes KATE, a novel adaptive optimizer. Methodologically, KATE is a scale-invariant adaptation of AdaGrad: the paper proves, for the first time, the scale invariance of KATE in the setting of generalized linear models (GLMs); the method eliminates the square-root operation in the gradient accumulation and integrates a diagonal Hessian-style approximation with normalized gradient scaling to yield an efficient, numerically stable adaptive learning-rate mechanism. Theoretically, KATE achieves an $O(\log T / \sqrt{T})$ convergence rate for smooth non-convex optimization, matching the best-known rates for AdaGrad and Adam. Empirically, KATE consistently outperforms AdaGrad on image- and text-classification tasks, offering faster training, greater hyperparameter robustness, and performance competitive with or superior to Adam.
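For context, here is a minimal NumPy sketch of the standard diagonal AdaGrad update, highlighting the square-root accumulation that KATE removes. This illustrates plain AdaGrad only, not KATE's own update rule; the function name and hyperparameter values are illustrative, not from the paper.

```python
import numpy as np

def adagrad_step(x, grad, accum, lr=0.1, eps=1e-8):
    """One diagonal AdaGrad update: accumulate squared gradients,
    then divide the step by the square root of the accumulator
    (this square root is what KATE's scheme eliminates)."""
    accum = accum + grad**2
    x = x - lr * grad / (np.sqrt(accum) + eps)
    return x, accum

# Toy usage: minimize f(x) = ||x||^2, whose gradient is 2x.
x = np.array([3.0, -2.0])
accum = np.zeros_like(x)
for _ in range(500):
    x, accum = adagrad_step(x, 2 * x, accum)
```

Because the per-coordinate step is `lr / sqrt(accum)`, coordinates with large historical gradients take smaller steps, which is the adaptivity both AdaGrad and KATE exploit.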
📝 Abstract
Adaptive methods are extremely popular in machine learning as they make learning rate tuning less expensive. This paper introduces a novel optimization algorithm named KATE, which presents a scale-invariant adaptation of the well-known AdaGrad algorithm. We prove the scale-invariance of KATE for the case of Generalized Linear Models. Moreover, for general smooth non-convex problems, we establish a convergence rate of $O\left(\frac{\log T}{\sqrt{T}}\right)$ for KATE, matching the best-known ones for AdaGrad and Adam. We also compare KATE to other state-of-the-art adaptive algorithms, Adam and AdaGrad, in numerical experiments with different problems, including complex machine learning tasks like image classification and text classification on real data. The results indicate that KATE consistently outperforms AdaGrad and matches/surpasses the performance of Adam in all considered scenarios.