Deconstructing What Makes a Good Optimizer for Language Models

📅 2024-07-10
🏛️ arXiv.org
📈 Citations: 15
Influential: 0
📄 PDF
🤖 AI Summary
This work systematically evaluates SGD, Adam, Adafactor, Lion, and Sophia for training autoregressive language models, assessing both optimization performance and hyperparameter robustness. Results show that, aside from SGD, mainstream adaptive optimizers exhibit no significant differences in empirical performance; practical selection should thus prioritize memory efficiency and implementation simplicity. To dissect why, the study examines two simplified versions of Adam: (1) Signum, which combines the gradient sign with momentum and recovers both Adam's performance and its hyperparameter stability with minimal computational and memory overhead; and (2) Adalayer, a layerwise variant of Adam introduced here to probe the effect of Adam's preconditioning on different layers of the network. The Adalayer analysis shows that adaptivity on the last-layer weights and LayerNorm parameters in particular is necessary for retaining performance and stability to learning rate, and extensive large-scale training runs, hyperparameter sensitivity analyses, and ablations back these conclusions, providing actionable insights for optimizer design and deployment.
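The summary describes Signum as combining the gradient sign with momentum. A minimal sketch of one such update step is below; the function name, the exponential-moving-average form of the momentum buffer, and the default coefficients are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def signum_step(param, grad, momentum, lr=0.01, beta=0.9):
    """One hypothetical Signum update: accumulate momentum, then step in its sign direction."""
    # Exponential moving average of gradients (one common momentum form; a plain
    # running sum beta*m + g is another variant used in practice).
    momentum = beta * momentum + (1 - beta) * grad
    # The update uses only the sign of the momentum, so every coordinate moves by
    # exactly lr (or not at all) -- no per-coordinate second-moment state as in Adam.
    param = param - lr * np.sign(momentum)
    return param, momentum

# usage: start from a zero momentum buffer and iterate per step
w = np.array([1.0, -2.0, 0.5])
g = np.array([0.3, -0.1, 0.0])
m = np.zeros_like(w)
w, m = signum_step(w, g, m)
```

Because the step magnitude is fixed at `lr` per coordinate, Signum needs only one extra buffer (the momentum) per parameter, which is the memory advantage over Adam noted in the summary.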

📝 Abstract
Training language models becomes increasingly expensive with scale, prompting numerous attempts to improve optimization efficiency. Despite these efforts, the Adam optimizer remains the most widely used, due to a prevailing view that it is the most effective approach. We aim to compare several optimization algorithms, including SGD, Adafactor, Adam, Lion, and Sophia in the context of autoregressive language modeling across a range of model sizes, hyperparameters, and architecture variants. Our findings indicate that, except for SGD, these algorithms all perform comparably both in their optimal performance and also in terms of how they fare across a wide range of hyperparameter choices. Our results suggest to practitioners that the choice of optimizer can be guided by practical considerations like memory constraints and ease of implementation, as no single algorithm emerged as a clear winner in terms of performance or stability to hyperparameter misspecification. Given our findings, we further dissect these approaches, examining two simplified versions of Adam: a) signed momentum (Signum) which we see recovers both the performance and hyperparameter stability of Adam and b) Adalayer, a layerwise variant of Adam which we introduce to study the impact on Adam's preconditioning for different layers of the network. Examining Adalayer leads us to the conclusion that, perhaps surprisingly, adaptivity on both the last layer and LayerNorm parameters in particular are necessary for retaining performance and stability to learning rate.
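The abstract introduces Adalayer as a layerwise variant of Adam for studying which layers need adaptive preconditioning. A hedged sketch of one plausible formulation is below: Adam's per-coordinate second moment is replaced by a single scalar shared across each layer's parameter tensor. The exact definition in the paper may differ; all names and defaults here are assumptions.

```python
import numpy as np

def adalayer_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One hypothetical layerwise-Adam update for a single layer.

    m: per-coordinate momentum buffer (same shape as param)
    v: scalar second-moment estimate shared by the whole layer
    t: 1-indexed step count, used for bias correction
    """
    m = beta1 * m + (1 - beta1) * grad
    # Key difference from Adam: the second moment is averaged over the layer,
    # so every coordinate in this tensor shares one adaptive scale.
    v = beta2 * v + (1 - beta2) * np.mean(grad ** 2)
    m_hat = m / (1 - beta1 ** t)      # standard Adam-style bias corrections
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```

Running this per layer (rather than with Adam's per-coordinate `v`) is what lets the paper ablate adaptivity layer by layer, e.g. keeping full Adam only on the last layer and LayerNorm parameters while the rest use a shared scale.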
Problem

Research questions and friction points this paper is trying to address.

Compare optimization algorithms for language models.
Evaluate performance across model sizes and hyperparameters.
Analyze simplified Adam variants for stability and efficiency.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compares SGD, Adafactor, Adam, Lion, Sophia optimizers
Introduces Adalayer, a layerwise variant of Adam
Analyzes Signum (signed momentum), a simplified version of Adam that recovers its performance and stability