🤖 AI Summary
This work investigates the impact of batch size on the optimization performance of momentum-based stochastic conditional gradient methods, such as Scion, under a fixed computational budget. Leveraging the μ-Kurdyka–Łojasiewicz condition, the study theoretically characterizes the interplay among batch size, step size, and stochastic noise, revealing a regime-dependent behavior: performance initially improves with larger batches but saturates, and can even degrade, beyond a critical threshold. Building on this insight, the authors propose an adaptive schedule for batch size and sequence length that preserves convergence guarantees. Theoretical predictions align closely with empirical results from NanoGPT experiments, validating both the optimal step-size scaling and the identified batch-size threshold, and thereby offering practical guidance for efficient large-scale training.
📝 Abstract
We study the role of batch size in stochastic conditional gradient methods under a $\mu$-Kurdyka–Łojasiewicz ($\mu$-KL) condition. Focusing on momentum-based stochastic conditional gradient algorithms (e.g., Scion), we derive a new analysis that explicitly captures the interaction between stepsize, batch size, and stochastic noise. Our study reveals a regime-dependent behavior: increasing the batch size initially improves optimization accuracy but, beyond a critical threshold, the benefits saturate and further increases can even degrade performance under a fixed token budget. Notably, the theory predicts the magnitude of the optimal stepsize and aligns well with empirical practice in large-scale training. Leveraging these insights, we derive principled guidelines for selecting the batch size and stepsize, and propose an adaptive strategy that increases the batch size and sequence length during training while preserving convergence guarantees. Experiments on NanoGPT are consistent with the theoretical predictions and illustrate the emergence of the predicted scaling regimes. Overall, our results provide a theoretical framework for understanding batch-size scaling in stochastic conditional gradient methods and offer guidance for designing efficient training schedules in large-scale optimization.
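To make the algorithmic family concrete, here is a minimal, illustrative sketch of a momentum-based stochastic conditional gradient step with a growing batch size. This is our own toy reconstruction under stated assumptions, not the paper's Scion implementation: the function names (`momentum_scg`, `lmo_l2_ball`), the choice of an $\ell_2$-ball linear minimization oracle, the geometric batch-growth factor, and the noisy quadratic objective are all illustrative assumptions.

```python
import numpy as np

def lmo_l2_ball(g, radius=1.0):
    # Linear minimization oracle over an l2 ball:
    # argmin_{||s|| <= radius} <g, s> = -radius * g / ||g||
    n = np.linalg.norm(g)
    return -radius * g / n if n > 0 else np.zeros_like(g)

def momentum_scg(grad_fn, x0, steps=200, lr=0.1, beta=0.9,
                 batch0=8, growth=1.02):
    """Illustrative momentum stochastic conditional gradient loop.

    grad_fn(x, batch) returns a stochastic gradient estimate whose
    noise shrinks as the batch size grows. The batch size increases
    geometrically, mimicking an adaptive batch-size schedule.
    """
    x = x0.astype(float).copy()
    m = np.zeros_like(x)
    batch = float(batch0)
    for _ in range(steps):
        g = grad_fn(x, int(batch))
        m = beta * m + (1 - beta) * g   # momentum: running gradient average
        s = lmo_l2_ball(m)              # LMO gives a bounded update direction
        x = x + lr * s                  # conditional-gradient-style step
        batch *= growth                 # grow the batch during training
    return x

# Toy demo: noisy quadratic f(x) = ||x - target||^2, where averaging
# over a batch of size b reduces the gradient noise by 1/sqrt(b).
rng = np.random.default_rng(0)
target = np.array([1.0, -2.0])

def noisy_grad(x, batch):
    return 2.0 * (x - target) + rng.normal(size=x.shape) / np.sqrt(batch)

x_final = momentum_scg(noisy_grad, np.zeros(2))
```

With a fixed stepsize, the iterate hovers near the minimizer at a radius set by the stepsize and residual noise, which is why the paper's analysis couples the optimal stepsize to the batch size rather than tuning either in isolation.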