AI Summary
To address the low training efficiency, poor generalization, and strong hyperparameter sensitivity of neural networks across varying scales, this paper proposes a scale-invariant adaptive optimization framework. The method unifies adaptive optimization, second-order information approximation, learning-rate scaling invariance, and gradient compression, thereby decoupling optimization from model size and hardware configuration. Its core innovation is a scale-robust update paradigm that ensures stable optimization dynamics under variations in parameter count, batch size, and device count. Extensive experiments across diverse architectures (MLPs, CNNs, and Transformers) and benchmarks (CIFAR-10/100, ImageNet, and WikiText) show that the framework achieves a 1.3–2.1× speedup over baseline optimizers, improves convergence stability, markedly reduces hyperparameter sensitivity, and eliminates the need for scale-specific hyperparameter tuning.
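The summary does not spell out the update rule, but the idea of a "scale-robust" adaptive step can be illustrated with a layerwise trust-ratio rescaling in the style of LARS/LAMB. The sketch below is a hypothetical, minimal NumPy example (the function name `scale_invariant_step` and all hyperparameter defaults are assumptions, not the paper's actual algorithm): an Adam-style direction is rescaled so that the step length is proportional to the parameter norm, making the update invariant to a rescaling of the layer's weights.

```python
import numpy as np

def scale_invariant_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam-style step with a layerwise trust ratio (illustrative only).

    This is NOT the paper's method; it sketches the general idea of
    scale-invariant adaptive updates: the step size scales with ||w||,
    so multiplying a layer's weights by a constant multiplies its
    update by the same constant.
    """
    # Standard Adam moment updates with bias correction.
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    step = m_hat / (np.sqrt(v_hat) + eps)
    # Trust ratio: rescale the step to be proportional to the weight norm.
    trust = np.linalg.norm(w) / (np.linalg.norm(step) + eps)
    w = w - lr * trust * step
    return w, m, v

# Invariance check: scaling the weights by 10 scales the update by 10.
w = np.array([1.0, 2.0])
g = np.array([0.1, -0.2])
w1, _, _ = scale_invariant_step(w, g, np.zeros(2), np.zeros(2), t=1)
w2, _, _ = scale_invariant_step(10 * w, g, np.zeros(2), np.zeros(2), t=1)
print(np.allclose(w2 - 10 * w, 10 * (w1 - w)))  # True
```

Because the Adam direction depends only on the gradient history while the trust ratio scales with `||w||`, the effective learning rate per layer adapts automatically to parameter magnitude, which is one common route to the learning-rate scaling invariance the summary mentions.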
Abstract
This article reviews modern optimization methods for training neural networks, with an emphasis on efficiency and scale. We present state-of-the-art optimization algorithms under a unified algorithmic template that highlights the importance of adapting to the structure of the problem. We then cover how to make these algorithms agnostic to the scale of the problem. Our exposition is intended as an introduction for both practitioners and researchers who wish to engage with these exciting new developments.