HomeAdam: Adam and AdamW Algorithms Sometimes Go Home to Obtain Better Provable Generalization

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Although the Adam and AdamW optimizers converge rapidly, they often generalize worse than SGD, and their known theoretical generalization error bounds are relatively loose. Grounded in algorithmic stability theory, this work establishes for the first time a generalization error bound of $O(\hat{\rho}^{-2T}/N)$ for square-root-free variants of Adam(W), denoted Adam(W)-srf, where $N$ is the training sample size and $T$ the iteration number. The authors further propose a new optimizer, HomeAdam(W), which sometimes returns to momentum SGD during training, thereby combining adaptive gradient scaling with momentum dynamics. This hybrid approach retains fast convergence while achieving a significantly tighter generalization error bound of $O(1/N)$, improving on both the Adam(W)-srf bound and the existing $O(1/\sqrt{N})$ bound of standard Adam(W). Empirical evaluations confirm the superior generalization performance of the proposed method.
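The mechanism described above can be pictured as a training loop that mostly takes Adam-style adaptive steps but periodically "goes home" to momentum SGD. The sketch below is a minimal illustration under assumed details: the switching schedule (`home_every`, `home_steps`), the hyperparameter values, and the use of the standard square-root Adam step are all illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def homeadam_sketch(grad_fn, w, T, lr=1e-3, beta1=0.9, beta2=0.999,
                    eps=1e-8, home_every=100, home_steps=10):
    """Illustrative HomeAdam-style loop: take Adam steps, but at the start
    of every `home_every`-step cycle, "go home" to momentum SGD for
    `home_steps` steps. The schedule is an assumption for illustration."""
    m = np.zeros_like(w)  # first-order momentum (shared by both phases)
    v = np.zeros_like(w)  # second-order momentum (used in the Adam phase)
    for t in range(1, T + 1):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g
        if (t % home_every) < home_steps:
            # "Home" phase: plain momentum SGD, no per-coordinate scaling.
            w = w - lr * m
        else:
            # Adam phase: adaptive per-coordinate step sizes.
            v = beta2 * v + (1 - beta2) * g * g
            m_hat = m / (1 - beta1 ** t)   # bias correction
            v_hat = v / (1 - beta2 ** t)
            w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w
```

Intuitively, the momentum-SGD phases behave like well-studied SGD with momentum, which is consistent with the tighter $O(1/N)$ bound the paper proves for the hybrid.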

📝 Abstract
Adam and AdamW are among the default optimizers for training deep learning models. These adaptive algorithms converge faster than SGD but generalize worse. Indeed, their proved generalization error $O(\frac{1}{\sqrt{N}})$ is larger than the $O(\frac{1}{N})$ of SGD, where $N$ denotes the training sample size. Although some variants of Adam have recently been proposed to improve its generalization, their improved generalization remains theoretically unexplored. To fill this gap, in this paper we restudy the generalization of Adam and AdamW via algorithmic stability, and first prove that Adam and AdamW without the square root (i.e., Adam(W)-srf) have a generalization error of $O(\frac{\hat{\rho}^{-2T}}{N})$, where $T$ denotes the iteration number and $\hat{\rho}>0$ denotes the smallest element of the second-order momentum plus a small positive number. To improve generalization, we propose a class of efficient Adam-style algorithms (i.e., HomeAdam(W)) that sometimes return to momentum-based SGD. Moreover, we prove that our HomeAdam(W) algorithms have a smaller generalization error of $O(\frac{1}{N})$ than the $O(\frac{\hat{\rho}^{-2T}}{N})$ of Adam(W)-srf, since $\hat{\rho}$ is generally very small; in particular, it is also smaller than the existing $O(\frac{1}{\sqrt{N}})$ of Adam(W). Meanwhile, we prove that our HomeAdam(W) algorithms have a faster convergence rate of $O(\frac{1}{T^{1/4}})$ than the $O(\frac{\breve{\rho}^{-1}}{T^{1/4}})$ of Adam(W)-srf, where $\breve{\rho}\leq\hat{\rho}$ is also very small. Extensive numerical experiments demonstrate the efficiency of our HomeAdam(W) algorithms.
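As a companion to the abstract's notation, here is a minimal sketch of a single square-root-free Adam(W) step, assuming "without square-root" means the usual $\sqrt{\hat{v}_t}$ denominator is replaced by $\hat{v}_t$ itself; the bias corrections and the decoupled weight-decay placement are assumptions carried over from standard Adam(W), not confirmed details of the paper.

```python
import numpy as np

def adam_srf_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
                  eps=1e-8, weight_decay=0.0):
    """One assumed square-root-free Adam(W) step: the denominator is
    v_hat + eps instead of sqrt(v_hat) + eps. Here eps is the small
    positive number added to the second-order momentum, so the smallest
    denominator entry plays the role of rho-hat in the bounds above."""
    m = beta1 * m + (1 - beta1) * g       # first-order momentum
    v = beta2 * v + (1 - beta2) * g * g   # second-order momentum
    m_hat = m / (1 - beta1 ** t)          # standard bias corrections
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (v_hat + eps)    # no square root in the denominator
    if weight_decay > 0.0:
        w = w - lr * weight_decay * w     # decoupled decay (AdamW-style)
    return w, m, v
```

Because $\hat{\rho}$ (the smallest denominator entry) is generally very small, the per-coordinate scaling $1/(\hat{v}_t + \epsilon)$ can be very large, which matches the $\hat{\rho}^{-2T}$ factor in the Adam(W)-srf bound.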
Problem

Research questions and friction points this paper is trying to address.

generalization
Adam optimizer
algorithmic stability
deep learning
optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

HomeAdam
generalization error
algorithmic stability
adaptive optimization
convergence rate