Seesaw: Accelerating Training by Balancing Learning Rate and Batch Size Scheduling

πŸ“… 2025-10-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
The co-scheduling of learning rate and batch size during large-model training has lacked theoretical foundations. Method: This paper proposes Seesawβ€”a principled dynamic batch-size scheduling framework grounded in a normalized-SGD proxy model for Adam. It establishes, for the first time, a finite-sample equivalence between learning-rate decay and batch-size growth. Contribution/Results: By jointly tuning learning-rate decay and batch-size growth, Seesaw preserves the loss trajectory while substantially reducing the number of serial training steps. Experiments on models ranging from 150M to 600M parameters show that Seesaw matches the final accuracy of cosine annealing at equal FLOPs, reduces wall-clock time by about 36%, and approaches the theoretical acceleration limit implied by the analysis.

πŸ“ Abstract
Increasing the batch size during training -- a "batch ramp" -- is a promising strategy to accelerate large language model pretraining. While for SGD, doubling the batch size can be equivalent to halving the learning rate, the optimal strategy for adaptive optimizers like Adam is less clear. As a result, any batch-ramp scheduling, if used at all, is typically tuned heuristically. This work develops a principled framework for batch-size scheduling and introduces Seesaw: whenever a standard scheduler would halve the learning rate, Seesaw instead multiplies it by $1/\sqrt{2}$ and doubles the batch size, preserving loss dynamics while reducing serial steps. Theoretically, we provide, to our knowledge, the first finite-sample proof of equivalence between learning-rate decay and batch-size ramp-up for SGD on noisy linear regression, and we extend this equivalence to normalized SGD, a tractable proxy for Adam, under a variance-dominated regime observed in practice. Empirically, on 150M/300M/600M-parameter models trained at Chinchilla scale using a constant (critical) batch size, Seesaw matches cosine decay at equal FLOPs while reducing wall-clock time by $\approx 36\%$, approaching the theoretical limit implied by our analysis.
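The Seesaw rule described in the abstract can be sketched in a few lines: at every point where a baseline schedule would halve the learning rate, multiply it by $1/\sqrt{2}$ and double the batch size instead. The function below is an illustrative sketch, not the authors' implementation; the function name and arguments are assumptions.

```python
import math

def seesaw_schedule(lr0, batch0, num_halvings):
    """Apply the Seesaw substitution for each scheduled halving event:
    lr *= 1/sqrt(2) and batch *= 2, instead of lr *= 1/2 at fixed batch.
    Returns the (lr, batch) pair after each event, starting from the initial one.
    """
    schedule = [(lr0, batch0)]
    lr, batch = lr0, batch0
    for _ in range(num_halvings):
        lr *= 1 / math.sqrt(2)  # in place of lr *= 0.5
        batch *= 2              # keeps the per-step noise scale lr^2/batch
        schedule.append((lr, batch))
    return schedule

# Example: after two events, the noise scale lr^2/batch equals that of
# halving the learning rate twice at constant batch size.
for lr, b in seesaw_schedule(3e-4, 256, 2):
    print(f"lr={lr:.2e}  batch={b}")
```

Because the batch doubles at each event, the same number of tokens is consumed in fewer serial steps, which is where the wall-clock saving comes from.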
Problem

Research questions and friction points this paper is trying to address.

Accelerating large language model training through balanced batch-size scheduling
Establishing equivalence between learning-rate decay and batch-ramp for SGD
Reducing wall-clock training time while maintaining model performance quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Balances learning rate and batch size scheduling
Uses normalized SGD as proxy for Adam optimizer
Reduces wall-clock time while preserving loss dynamics
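The equivalence behind the bullets above can be seen from the per-step gradient-noise scale of SGD; this is a standard back-of-the-envelope check consistent with the abstract's claim, not the paper's formal finite-sample proof. With learning rate $\eta$, batch size $B$, and per-sample gradient variance $\sigma^2$, a halving step and a Seesaw step inject the same noise:

$$
\underbrace{\frac{(\eta/2)^2\,\sigma^2}{B}}_{\text{halve } \eta,\ B \text{ fixed}}
\;=\;
\frac{\eta^2 \sigma^2}{4B}
\;=\;
\underbrace{\frac{(\eta/\sqrt{2})^2\,\sigma^2}{2B}}_{\text{Seesaw: } \eta/\sqrt{2},\ 2B}
$$

so the loss dynamics are preserved while each step processes twice as many samples.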