🤖 AI Summary
This work addresses the limitations of conventional learning rate warmup strategies, which rely on heuristic hyperparameter tuning and lack theoretical grounding, and which can be unstable under norm-constrained optimizers such as Muon and Lion. Building on a generalized smoothness assumption that links local curvature to the suboptimality gap, the paper derives a learning rate schedule in which both warmup and decay emerge directly from the convergence analysis rather than being imposed by hand. The resulting method requires no additional hyperparameters and adapts the warmup duration automatically. Evaluated on LLaMA large language model pretraining, it consistently matches or surpasses manually tuned warmup baselines across all experimental settings, improving training robustness while eliminating warmup tuning.
📝 Abstract
We study adaptive learning rate scheduling for norm-constrained optimizers (e.g., Muon and Lion). We introduce a generalized smoothness assumption under which local curvature decreases with the suboptimality gap and empirically verify that this behavior holds along optimization trajectories. Under this assumption, we establish convergence guarantees under an appropriate choice of learning rate, for which warm-up followed by decay arises naturally from the proof rather than being imposed heuristically. Building on this theory, we develop a practical learning rate scheduler that relies only on standard hyperparameters and adapts the warm-up duration automatically at the beginning of training. We evaluate this method on large language model pretraining with LLaMA architectures and show that our adaptive warm-up selection consistently outperforms or at least matches the best manually tuned warm-up schedules across all considered setups, without additional hyperparameter search. Our source code is available at https://github.com/brain-lab-research/llm-baselines/tree/warmup
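To make the warmup-then-decay intuition concrete, here is a minimal illustrative sketch, not the paper's actual algorithm. It assumes a generalized smoothness model in which the local smoothness constant grows with the suboptimality gap, roughly L(w) ≈ L0 + L1·(f(w) − f*), so a stability-limited step size proportional to 1/L(w) is small while the loss is far from its target (an automatic warm-up) and grows as the gap closes; a standard horizon-based decay handles the tail. The function name, the gap estimate, and the constants `eta_max` and `L1` are all hypothetical choices for illustration.

```python
def adaptive_lr(loss_t, loss_star, step, total_steps,
                eta_max=1e-3, L1=5.0):
    """Illustrative warmup-then-decay schedule (hypothetical sketch,
    not the paper's exact rule).

    Warm-up factor: lr ~ 1 / (1 + L1 * suboptimality gap), i.e. the
    step size is small when the current loss is far from its target
    and rises automatically as the gap shrinks.
    Decay factor: a standard linear decay over the training horizon.
    """
    gap = max(loss_t - loss_star, 0.0)   # estimated suboptimality gap
    warmup = 1.0 / (1.0 + L1 * gap)      # small lr while the gap is large
    decay = 1.0 - step / total_steps     # horizon-based linear decay
    return eta_max * warmup * decay
```

For example, early in training with loss 4.0 against a target of 1.0, the warm-up factor is 1/(1 + 5·3) = 1/16 of the peak rate; as the loss approaches the target, the schedule transitions to the plain decayed rate, so no separate warmup length needs to be chosen.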