Time Transfer: On Optimal Learning Rate and Batch Size In The Infinite Data Limit

📅 2024-10-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
How should learning rate η and batch size B jointly scale with token budget T in the infinite-data limit for large language models? Method: Building on the maximal update parametrization (μP) framework, the authors combine theoretical analysis with large-scale empirical measurements to derive a T-driven optimal scaling law. Contribution/Results: They observe, for the first time, that the critical batch size scales as B_crit ∝ T, challenging the conventional view that B_crit depends solely on the loss value. They further show that the optimal η decays with increasing T, and that the sensitivity of the loss to η decreases with T while remaining constant under μP model scaling. This yields an optimal dynamic η–B trajectory in which the batch size must adapt throughout training. The observed dynamics are preserved under μP model scaling, offering a first step toward a unified picture of co-scaling data, model size, and compute.

📝 Abstract
One of the main challenges in optimal scaling of large language models (LLMs) is the prohibitive cost of hyperparameter tuning, particularly learning rate $\eta$ and batch size $B$. While techniques like $\mu$P (Yang et al., 2022) provide scaling rules for optimal $\eta$ transfer in the infinite model size limit, the optimal scaling behavior in the infinite data size limit remains unknown. We fill in this gap by observing for the first time an intricate dependence of optimal $\eta$ scaling on the pretraining token budget $T$, $B$ and its relation to the critical batch size $B_\mathrm{crit}$, which we measure to evolve as $B_\mathrm{crit} \propto T$. Furthermore, we show that the optimal batch size is positively correlated with $B_\mathrm{crit}$: keeping it fixed becomes suboptimal over time even if learning rate is scaled optimally. Surprisingly, our results demonstrate that the observed optimal $\eta$ and $B$ dynamics are preserved with $\mu$P model scaling, challenging the conventional view of $B_\mathrm{crit}$ dependence solely on loss value. Complementing optimality, we examine the sensitivity of loss to changes in learning rate, where we find the sensitivity to decrease with increase of $T$ and to remain constant with $\mu$P model scaling. We hope our results make the first step towards a unified picture of the joint optimal data and model scaling.
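The abstract's two headline trends, a critical batch size growing linearly with the token budget ($B_\mathrm{crit} \propto T$) and an optimal learning rate that decays as $T$ grows, can be sketched as a toy schedule. This is a minimal illustration only: the proportionality constant `c`, the base rate `eta0`, and the decay exponent `alpha` are all placeholder assumptions, not the paper's fitted values.

```python
# Toy sketch of the claimed scaling trends (illustrative constants only,
# NOT the authors' fitted law).

def critical_batch_size(T: float, c: float = 1e-4) -> float:
    """Hypothetical B_crit ∝ T; c is an assumed proportionality constant."""
    return c * T

def optimal_lr(T: float, eta0: float = 1e-2, alpha: float = 0.5) -> float:
    """Hypothetical optimal learning rate decaying with token budget T.

    The power-law form and exponent alpha are placeholder assumptions."""
    return eta0 * T ** (-alpha)

# Dynamic eta-B trajectory: as training consumes more tokens, B_crit rises,
# so a batch size held fixed eventually falls below it and becomes suboptimal,
# even if the learning rate itself is scheduled optimally.
for T in (1e9, 1e10, 1e11):
    print(f"T={T:.0e}: B_crit ~ {critical_batch_size(T):.0f}, "
          f"eta ~ {optimal_lr(T):.2e}")
```

Under these placeholder forms, a fixed batch size that is near-critical early in training sits far below $B_\mathrm{crit}$ after 10–100x more tokens, which is the abstract's argument for adapting $B$ during training.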
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Learning Rate Adjustment
Batch Size Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Learning Rate Optimization
Batch Size Adaptation