Through the River: Understanding the Benefit of Schedule-Free Methods for Language Model Training

📅 2025-07-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional learning rate schedules (e.g., cosine decay) and alternatives such as warmup-stable-decay (WSD) schedules or explicit weight averaging suffer from poor scalability, manual hyperparameter tuning, or extra memory overhead in large language model (LLM) training. Method: This paper revisits Schedule-Free (SF) optimization [Defazio et al., 2024], a paradigm that eliminates explicit learning rate decay and auxiliary averaging buffers, and shows that SF-AdamW implicitly performs weight averaging through its momentum-based updates. The authors further propose a refined SF variant with improved robustness to momentum settings and better behavior under large batch sizes. Contribution/Results: Theoretically and empirically, SF is shown to navigate the long-range "river-like" structure of the loss landscape without decay phases or auxiliary averaging, and to remain effective as training workloads scale. The approach combines theoretical grounding, engineering simplicity, and scalability, establishing SF as a practical method for efficient LLM training.
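To make the "no decay, no auxiliary storage" idea concrete, here is a minimal sketch of the Schedule-Free update in its plain-SGD form, following the general scheme of Defazio et al. [2024] (SF-AdamW replaces the inner gradient step with an AdamW step). Variable names follow the usual SF convention and the toy objective is illustrative only: z is the base iterate, x is the running average that serves as the returned model, and y is the interpolated point where gradients are evaluated; the exact averaging weights and interpolation coefficient here are simplified assumptions.

```python
def schedule_free_sgd(grad, w0, steps=2000, lr=0.1, beta=0.9):
    """Sketch of Schedule-Free SGD on a scalar parameter (illustrative)."""
    z = x = w0
    for t in range(steps):
        y = (1 - beta) * z + beta * x   # gradient is evaluated at an interpolation of z and x
        z = z - lr * grad(y)            # base step with a CONSTANT lr: no decay schedule
        c = 1.0 / (t + 2)               # uniform averaging weight (simplified choice)
        x = (1 - c) * x + c * z         # in-place running average: no separate averaging buffer
    return x  # the averaged iterate is the model you keep

# Toy example: minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3)
w_star = schedule_free_sgd(lambda w: 2.0 * (w - 3.0), w0=0.0)
```

Note how the averaged weights x are updated in place each step, which is the sense in which SF performs weight averaging without the extra checkpoint-sized buffer that explicit averaging schemes require.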

📝 Abstract
As both model and dataset sizes continue to scale rapidly, conventional pretraining strategies with fixed compute budgets, such as cosine learning rate schedules, are increasingly inadequate for large-scale training. Recent alternatives, including warmup-stable-decay (WSD) schedules and weight averaging, offer greater flexibility. However, WSD relies on explicit decay phases to track progress, while weight averaging addresses this limitation at the cost of additional memory. In search of a more principled and scalable alternative, we revisit the Schedule-Free (SF) method [Defazio et al., 2024], which has shown strong empirical performance across diverse settings. We show that SF-AdamW effectively navigates the "river" structure of the loss landscape without decay phases or auxiliary averaging, making it particularly suitable for continuously scaling training workloads. To understand this behavior, we conduct a theoretical and empirical analysis of SF dynamics, revealing that it implicitly performs weight averaging without memory overhead. Guided by this analysis, we propose a refined variant of SF that improves robustness to momentum and performs better under large batch sizes, addressing key limitations of the original method. Together, these results establish SF as a practical, scalable, and theoretically grounded approach for language model training.
Problem

Research questions and friction points this paper is trying to address.

Fixed-compute pretraining strategies (e.g., cosine schedules) scale poorly for large language models
WSD schedules require explicit decay phases; weight averaging costs extra memory
Whether Schedule-Free methods can deliver scalable, memory-efficient training without decay phases
Innovation

Methods, ideas, or system contributions that make the work stand out.

Schedule-Free method avoids decay phases
Implicit weight averaging without memory cost
Refined SF variant improves robustness to momentum and large batch sizes
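The "implicit weight averaging without memory cost" point can be made precise: unrolling the SF interpolation step with uniform averaging weights $c_t = 1/t$ (one common choice; the paper's exact weighting may differ) shows that the returned iterate is the running mean of the base iterates, maintained in place rather than in a separate averaging buffer.

```latex
x_t = (1 - c_t)\, x_{t-1} + c_t\, z_t,
\qquad c_t = \tfrac{1}{t}
\;\;\Longrightarrow\;\;
x_T = \frac{1}{T} \sum_{t=1}^{T} z_t
```

The identity follows by induction: $x_1 = z_1$ since $c_1 = 1$, and each step folds the new $z_t$ into the mean with weight $1/t$.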