What do near-optimal learning rate schedules look like?

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the prevailing reliance on heuristic learning rate scheduling strategies, which lack a systematic understanding of optimal schedule shapes. The authors propose a method that decouples the base learning rate from the schedule shape, enabling automatic discovery of near-optimal learning rate schedules within a parameterized family tailored to specific tasks. Experiments across image classification, language modeling, and linear regression reveal that warmup followed by decay is a robust characteristic of high-performing schedules, that commonly used schedule families are often suboptimal, and that weight decay significantly influences the optimal schedule shape. The study offers a systematic characterization of properties shared by near-optimal learning rate schedules and establishes a clear connection between schedule shape and other optimization hyperparameters.

📝 Abstract
A basic unanswered question in neural network training is: what is the best learning rate schedule shape for a given workload? The choice of learning rate schedule is a key factor in the success or failure of the training process, but beyond having some kind of warmup and decay, there is no consensus on what makes a good schedule shape. To answer this question, we designed a search procedure to find the best shapes within a parameterized schedule family. Our approach factors out the schedule shape from the base learning rate, which otherwise would dominate cross-schedule comparisons. We applied our search procedure to a variety of schedule families on three workloads: linear regression, image classification on CIFAR-10, and small-scale language modeling on Wikitext103. We showed that our search procedure indeed generally found near-optimal schedules. We found that warmup and decay are robust features of good schedules, and that commonly used schedule families are not optimal on these workloads. Finally, we explored how the outputs of our shape search depend on other optimization hyperparameters, and found that weight decay can have a strong effect on the optimal schedule shape. To the best of our knowledge, our results represent the most comprehensive results on near-optimal schedule shapes for deep neural network training, to date.
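The abstract's central idea, factoring the schedule shape out of the base learning rate so that shapes can be compared on equal footing, can be sketched as a normalized shape function that peaks at 1.0, scaled by the base learning rate. The warmup-then-decay form, the fraction parameter, and the function names below are illustrative assumptions; the paper searches over parameterized families rather than fixing one shape.

```python
import math

def schedule_shape(step, total_steps, warmup_frac=0.1):
    """Normalized warmup-then-decay shape s(t) in [0, 1].

    Linear warmup over the first `warmup_frac` of training, then
    cosine decay to zero. Illustrative sketch only: `warmup_frac`
    and the cosine decay are assumptions, not the paper's family.
    """
    warmup_steps = max(1, int(warmup_frac * total_steps))
    if step < warmup_steps:
        return step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

def learning_rate(step, total_steps, base_lr=1e-3, warmup_frac=0.1):
    # The base learning rate only scales the shape; because every
    # shape peaks at 1.0, shapes can be compared across families
    # without the base learning rate dominating the comparison.
    return base_lr * schedule_shape(step, total_steps, warmup_frac)
```

With this factoring, a shape search tunes the parameters of `schedule_shape` while the base learning rate is swept separately for each candidate shape.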
Problem

Research questions and friction points this paper is trying to address.

learning rate schedule
neural network training
schedule shape
optimization
hyperparameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

learning rate schedule
schedule shape optimization
hyperparameter search
weight decay interaction
neural network training