🤖 AI Summary
The theoretical understanding of adaptive optimizers remains incomplete. Method: This work establishes a more accurate stochastic differential equation (SDE) framework, providing the first rigorous SDE characterization of SignSGD and refined SDE models for AdamW and RMSpropW, by combining gradient-noise analysis, curvature-aware modeling, and Euler-Maruyama numerical simulation, validated systematically across MLPs, CNNs, ResNets, and Transformers. Contributions/Results: (1) It uncovers deep couplings among adaptation mechanisms, gradient noise, and Hessian curvature; (2) it quantifies how SignSGD differs from SGD in convergence rate, stationary distribution, and robustness to heavy-tailed noise; (3) the proposed SDE models reproduce optimization trajectories far more faithfully than existing SDE models, and the theoretical predictions are corroborated across diverse architectures, offering new foundations for modeling training dynamics, designing scaling rules, and improving optimizer design.
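For context, the discrete updates these SDEs approximate are the standard ones; the notation below is illustrative and not taken from the paper:

$$
\theta_{k+1} = \theta_k - \eta\, g_k \quad \text{(SGD)}, \qquad
\theta_{k+1} = \theta_k - \eta\, \mathrm{sign}(g_k) \quad \text{(SignSGD)},
$$

where $g_k$ is a stochastic gradient of the loss at $\theta_k$ and the sign is applied element-wise. The SDE view replaces these discrete recursions with continuous-time dynamics whose drift and diffusion encode the optimizer's adaptation mechanism and the gradient-noise statistics.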
📝 Abstract
Despite the vast empirical evidence supporting the efficacy of adaptive optimization methods in deep learning, their theoretical understanding is far from complete. This work introduces novel SDEs for commonly used adaptive optimizers: SignSGD, RMSprop(W), and Adam(W). These SDEs offer a quantitatively accurate description of these optimizers and help illuminate an intricate relationship between adaptivity, gradient noise, and curvature. Our novel analysis of SignSGD highlights a noteworthy and precise contrast to SGD in terms of convergence speed, stationary distribution, and robustness to heavy-tailed noise. We extend this analysis to AdamW and RMSpropW, for which we observe that the role of noise is much more complex. Crucially, we support our theoretical analysis with experimental evidence: we verify our insights by numerically integrating our SDEs using Euler-Maruyama discretization on various neural network architectures such as MLPs, CNNs, ResNets, and Transformers. Our SDEs accurately track the behavior of the respective optimizers, especially when compared to previous SDEs derived for Adam and RMSprop. We believe our approach can provide valuable insights into best training practices and novel scaling rules.
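As a rough illustration of the verification procedure, the sketch below shows a minimal Euler-Maruyama integrator for a generic SDE $dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$. The drift and diffusion used in the toy example are placeholders (a quadratic loss with constant noise), not the SDE coefficients derived in the paper.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, dt, n_steps, rng=None):
    """Simulate dX_t = drift(X_t) dt + diffusion(X_t) dW_t with the
    Euler-Maruyama scheme: X_{k+1} = X_k + b(X_k) dt + sigma(X_k) sqrt(dt) * xi_k,
    where xi_k ~ N(0, I)."""
    rng = rng or np.random.default_rng(0)
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        x = xs[-1]
        noise = rng.standard_normal(x.shape)
        xs.append(x + drift(x) * dt + diffusion(x) * np.sqrt(dt) * noise)
    return np.stack(xs)

# Toy example (placeholder dynamics, not the paper's optimizer SDEs):
# a quadratic loss f(x) = 0.5 * ||x||^2 gives gradient-flow drift -x,
# with a constant diffusion standing in for gradient noise.
traj = euler_maruyama(drift=lambda x: -x,
                      diffusion=lambda x: 0.1 * np.ones_like(x),
                      x0=np.ones(3), dt=1e-2, n_steps=1000)
print(traj.shape)  # (1001, 3)
```

In the paper's experiments, the simulated SDE trajectory is compared against the discrete optimizer's trajectory on the same model and data; the same discretization idea applies once the optimizer-specific drift and diffusion are plugged in.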