Benchmarking Optimizers for Large Language Model Pretraining

📅 2025-09-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current LLM pretraining lacks a standardized benchmark for optimizer evaluation, hindering reproducibility and fair comparison. This work introduces the first unified, systematic evaluation framework for optimization algorithms in LLM pretraining. We conduct controlled, cross-optimizer comparisons—including AdamW, Lion, and Adafactor—across diverse model scales, batch sizes, and training durations. Crucially, we ensure fairness and reproducibility through rigorous ablation of confounding variables and meticulous hyperparameter tuning. Our analysis uncovers fundamental trade-offs among convergence speed, training stability, and computational efficiency, yielding practical, scenario-aware optimizer selection guidelines. All code, hyperparameter configurations, and experimental results are publicly released to establish a reproducible benchmark and accelerate empirical optimizer research.

📝 Abstract
The recent development of Large Language Models (LLMs) has been accompanied by an effervescence of novel ideas and methods to better optimize the loss of deep learning models. Claims from those methods are myriad: from faster convergence to removing reliance on certain hyperparameters. However, the diverse experimental protocols used to validate these claims make direct comparisons between methods challenging. This study presents a comprehensive evaluation of recent optimization techniques across standardized LLM pretraining scenarios, systematically varying model size, batch size, and training duration. Through careful tuning of each method, we provide guidance to practitioners on which optimizer is best suited for each scenario. For researchers, our work highlights promising directions for future optimization research. Finally, by releasing our code and making all experiments fully reproducible, we hope our efforts can help the development and rigorous benchmarking of future methods.
Problem

Research questions and friction points this paper is trying to address.

Evaluating optimizer performance in large language model pretraining
Comparing optimization methods under standardized experimental conditions
Identifying the best optimizer for each model and training configuration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive evaluation of optimization techniques
Systematic variation of model and training parameters
Releasing code for reproducible benchmarking