Fast Catch-Up, Late Switching: Optimal Batch Size Scheduling via Functional Scaling Laws

📅 2026-02-15
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the problem of designing optimal batch size schedules under a fixed data budget, balancing optimization dynamics against computational efficiency. Building on the functional scaling law (FSL) framework and incorporating gradient-noise forgetting dynamics, the study systematically investigates how task difficulty shapes batch size scheduling. The analysis reveals that easy tasks benefit from steadily increasing batch sizes throughout training, whereas difficult tasks achieve better performance by switching to large batch sizes only in the later stages of training, a strategy termed “late switching.” This strategy exploits a “fast catch-up” mechanism to substantially reduce data consumption without compromising model performance. Empirical validation on dense and mixture-of-experts (MoE) large language models, trained at scales of up to 1.1B parameters and 1T tokens, consistently demonstrates the superiority of late switching over both constant-batch-size and early-switching baselines.
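To make the fixed-data-budget setup concrete, here is a minimal sketch (illustrative only, not the paper's actual formulation) of how constant, early-switch, and late-switch schedules can be laid out so that each consumes roughly the same data budget; the batch sizes, budget, and switch fractions below are placeholder values.

```python
# Illustrative only: two-phase batch size schedules (small batch, then large batch)
# that each consume (up to integer rounding) the same fixed data budget.
# `switch_frac` is the fraction of the data budget spent at the small batch size.
def two_phase_schedule(data_budget, b_small, b_large, switch_frac):
    """Return a per-step batch size list spending `switch_frac` of the budget at b_small."""
    steps_small = int(switch_frac * data_budget) // b_small
    steps_large = (data_budget - steps_small * b_small) // b_large
    return [b_small] * steps_small + [b_large] * steps_large

budget = 1_000_000                                        # samples/tokens (placeholder)
constant     = two_phase_schedule(budget, 512, 512, 0.0)  # constant large batch
early_switch = two_phase_schedule(budget, 32, 512, 0.2)   # switch after 20% of the data
late_switch  = two_phase_schedule(budget, 32, 512, 0.8)   # switch after 80% of the data

for name, sched in [("constant", constant), ("early-switch", early_switch), ("late-switch", late_switch)]:
    print(f"{name:12s} steps = {len(sched):6d}, data = {sum(sched):,}")
```

All three layouts spend essentially the same data; they differ only in where the switch to large batches is placed, which is the design variable the paper optimizes.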

📝 Abstract
Batch size scheduling (BSS) plays a critical role in large-scale deep learning training, influencing both optimization dynamics and computational efficiency. Yet, its theoretical foundations remain poorly understood. In this work, we show that the functional scaling law (FSL) framework introduced in Li et al. (2025a) provides a principled lens for analyzing BSS. Specifically, we characterize the optimal BSS under a fixed data budget and show that its structure depends sharply on task difficulty. For easy tasks, optimal schedules keep increasing batch size throughout. In contrast, for hard tasks, the optimal schedule maintains small batch sizes for most of training and switches to large batches only in a late stage. To explain the emergence of late switching, we uncover a dynamical mechanism -- the fast catch-up effect -- which also manifests in large language model (LLM) pretraining. After switching from small to large batches, the loss rapidly aligns with the constant large-batch trajectory. Using FSL, we show that this effect stems from rapid forgetting of accumulated gradient noise, with the catch-up speed determined by task difficulty. Crucially, this effect implies that large batches can be safely deferred to late training without sacrificing performance, while substantially reducing data consumption. Finally, extensive LLM pretraining experiments -- covering both Dense and MoE architectures with up to 1.1B parameters and 1T tokens -- validate our theoretical predictions. Across all settings, late-switch schedules consistently outperform constant-batch and early-switch baselines.
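As a rough intuition for the fast catch-up effect described in the abstract, the following toy simulation (a minimal sketch under simplifying assumptions: a 1-D noisy quadratic with illustrative constants, not the paper's actual FSL analysis) tracks the expected loss under a constant-large-batch schedule and a late-switch schedule.

```python
# Toy illustration of the "fast catch-up" intuition: expected-loss recursion for SGD on
# a 1-D quadratic f(w) = 0.5 * lam * w**2, with per-sample gradient noise variance
# sigma**2, so a batch of size B contributes noise variance sigma**2 / B per step.
import numpy as np

def expected_loss_curve(batch_schedule, lam=1.0, eta=0.05, sigma=1.0, w0_sq=4.0):
    """Track E[w_t^2] under w_{t+1} = w_t - eta*(lam*w_t + xi_t), Var(xi_t) = sigma^2 / B_t."""
    w_sq, losses, data_used = w0_sq, [], 0
    for B in batch_schedule:
        losses.append(0.5 * lam * w_sq)
        w_sq = (1 - eta * lam) ** 2 * w_sq + eta ** 2 * sigma ** 2 / B
        data_used += B
    return np.array(losses), data_used

steps = 2000
schedules = {
    "constant-large": [512] * steps,                                   # large batch throughout
    "late-switch":    [32] * (steps * 4 // 5) + [512] * (steps // 5),  # small batch, switch late
}
for name, sched in schedules.items():
    losses, data = expected_loss_curve(sched)
    print(f"{name:15s} final loss = {losses[-1]:.3e}, data consumed = {data:,}")
# After the switch, the noise accumulated during the small-batch phase is "forgotten"
# geometrically at rate (1 - eta*lam)^2 per step, so the late-switch loss rapidly aligns
# with the constant-large-batch trajectory while having consumed far fewer samples.
```

The geometric decay of the small-batch noise term in this toy recursion is a stand-in for the gradient-noise forgetting mechanism that, per the paper, makes deferring large batches to late training safe.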
Problem

Research questions and friction points this paper is trying to address.

batch size scheduling
functional scaling laws
optimization dynamics
large language model pretraining
task difficulty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Functional Scaling Laws
Batch Size Scheduling
Fast Catch-Up Effect
Large Language Model Pretraining
Gradient Noise Forgetting
Jinbo Wang
Texas A&M University
Ocean dynamics
Binghui Li
CMLR, Peking University
machine learning; deep learning theory
Zhanpeng Zhou
Shanghai Jiao Tong University
Deep Learning Theory
Mingze Wang
School of Mathematical Sciences, Peking University
Machine Learning Theory; Deep Learning Theory; Optimization
Yuxuan Sun
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China
Jiaqi Zhang
Meituan, Beijing
Xunliang Cai
Meituan, Beijing
Lei Wu
School of Mathematical Sciences, Peking University; Center for Machine Learning Research, Peking University; AI for Science Institute, Beijing