Fantastic Pretraining Optimizers and Where to Find Them

📅 2025-09-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prior studies overestimate the acceleration offered by novel optimizers (e.g., Muon, Soap) due to insufficient hyperparameter tuning, reliance on single evaluation points (e.g., mid-training checkpoints), and neglect of late-training performance. Method: The authors conduct a systematic pretraining evaluation of ten optimizers across four model scales (0.1B–1.2B parameters) and diverse data-to-model ratios. They advocate three fairness conditions: exhaustive hyperparameter search, validation across multiple model scales, and assessment of the full training run, including the effect of learning-rate decay. Contribution/Results: Matrix-preconditioning optimizers accelerate training substantially on small models but yield only a 1.1× speedup over AdamW at 1.2B parameters, and evaluating mid-training checkpoints frequently leads to misleading conclusions. This work establishes an empirical benchmark and a methodological standard for optimizer evaluation built on comprehensive, multi-scale, end-of-training assessment.
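The summary's point about learning-rate decay is that two optimizers' rankings can flip as the schedule anneals, so only end-of-training comparisons are trustworthy. A minimal sketch of a typical warmup-plus-cosine schedule illustrates the mechanism; the function name and all values here are illustrative, not the paper's settings.

```python
import math

def cosine_lr(step, total_steps, peak_lr=3e-4, min_lr=3e-5, warmup_frac=0.01):
    """Linear warmup followed by cosine decay to min_lr.

    Illustrative sketch only: under such a schedule, a mid-training
    checkpoint is evaluated at a much higher learning rate than the
    final one, which is why intermediate rankings can be misleading.
    """
    warmup_steps = int(warmup_frac * total_steps)
    if step < warmup_steps:
        return peak_lr * step / max(warmup_steps, 1)
    progress = (step - warmup_steps) / max(total_steps - warmup_steps, 1)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

A mid-training checkpoint at `step = total_steps // 2` sits near the mean of the peak and floor rates, far from the annealed rate at which the final comparison is made.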

📝 Abstract
AdamW has long been the dominant optimizer in language model pretraining, despite numerous claims that alternative optimizers offer a 1.4 to 2x speedup. We posit that two methodological shortcomings have obscured fair comparisons and hindered practical adoption: (i) unequal hyperparameter tuning and (ii) limited or misleading evaluation setups. To address these two issues, we conduct a systematic study of ten deep learning optimizers across four model scales (0.1B-1.2B parameters) and data-to-model ratios (1-8x the Chinchilla optimum). We find that fair and informative comparisons require rigorous hyperparameter tuning and evaluations across a range of model scales and data-to-model ratios, performed at the end of training. First, optimal hyperparameters for one optimizer may be suboptimal for another, making blind hyperparameter transfer unfair. Second, the actual speedup of many proposed optimizers over well-tuned baselines is lower than claimed and decreases with model size, to only 1.1x for 1.2B parameter models. Third, comparing intermediate checkpoints before reaching the target training budgets can be misleading, as rankings between two optimizers can flip during training due to learning rate decay. Through our thorough investigation, we find that all of the fastest optimizers, such as Muon and Soap, use matrices as preconditioners -- multiplying gradients with matrices rather than entry-wise scalars. However, the speedup of matrix-based optimizers shrinks with model scale, decreasing from 1.4x over AdamW for 0.1B parameter models to merely 1.1x for 1.2B parameter models.
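The distinction the abstract draws -- entry-wise scalar scaling (AdamW) versus matrix preconditioning (Muon, Soap) -- can be sketched as below. This is a simplified illustration: the function names are ours, and the plain Newton-Schulz iteration stands in for Muon's tuned polynomial; it is not the paper's or any library's exact recipe.

```python
import numpy as np

def entrywise_update(grad, m, v, beta1=0.9, beta2=0.999, eps=1e-8):
    """AdamW-style step direction: each parameter is rescaled
    independently by a scalar derived from its own running moments."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    return m / (np.sqrt(v) + eps), m, v

def matrix_preconditioned_update(grad, steps=10):
    """Muon-like matrix preconditioning (sketch): approximately
    orthogonalize a 2-D gradient with a Newton-Schulz iteration,
    i.e. multiply the gradient by matrices rather than scaling
    each entry on its own."""
    X = grad / (np.linalg.norm(grad) + 1e-8)  # bound the spectral norm by 1
    for _ in range(steps):
        # Classic Newton-Schulz step toward the orthogonal polar factor;
        # it equalizes singular values instead of rescaling entries.
        X = 1.5 * X - 0.5 * (X @ X.T) @ X
    return X
```

The key contrast: the entry-wise update leaves the gradient's direction structure per-coordinate, while the matrix step mixes coordinates, flattening the gradient's singular-value spectrum.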
Problem

Research questions and friction points this paper is trying to address.

Evaluating optimizer performance fairly across model scales
Addressing misleading hyperparameter transfer in optimizer comparisons
Assessing actual speedup claims of alternative optimizers over AdamW
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic optimizer comparison across model scales
Rigorous hyperparameter tuning for fair evaluations
Matrix preconditioners outperform scalar methods
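The first two innovation points amount to tuning each optimizer's hyperparameters independently rather than transferring one optimizer's settings to another. A minimal sketch of that protocol, with hypothetical names and toy search spaces (not the paper's grids or its training code):

```python
import itertools

def tune_each_optimizer(optimizers, grids, train_and_eval):
    """Per-optimizer hyperparameter search (sketch).

    `grids` maps each optimizer name to its own search space, so no
    optimizer inherits another's tuned settings. `train_and_eval` is a
    placeholder for a full training run returning final validation loss,
    evaluated at the end of training per the paper's protocol.
    """
    best = {}
    for name in optimizers:
        grid = grids[name]
        configs = [dict(zip(grid, vals))
                   for vals in itertools.product(*grid.values())]
        best[name] = min(configs, key=lambda cfg: train_and_eval(name, cfg))
    return best
```

Because each optimizer is scored on its own grid, a comparison between the returned winners is fair in the paper's sense: neither side is handicapped by borrowed hyperparameters.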