Where Does Warm-Up Come From? Adaptive Scheduling for Norm-Constrained Optimizers

📅 2026-02-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of conventional learning rate warm-up strategies, which rely on heuristic hyperparameter tuning, lack theoretical grounding, and are particularly unstable under norm-constrained optimizers such as Muon and Lion. Building on a generalized smoothness assumption that links local curvature to the suboptimality gap, the paper derives, for the first time, a learning rate schedule in which both warm-up and decay emerge directly from the convergence analysis. The resulting method is fully adaptive: it requires no additional hyperparameters and adjusts the warm-up duration automatically. Evaluated on LLaMA large language model pretraining, it consistently matches or surpasses manually tuned baselines across all experimental settings, improving training efficiency and robustness.

📝 Abstract
We study adaptive learning rate scheduling for norm-constrained optimizers (e.g., Muon and Lion). We introduce a generalized smoothness assumption under which local curvature decreases with the suboptimality gap and empirically verify that this behavior holds along optimization trajectories. Under this assumption, we establish convergence guarantees under an appropriate choice of learning rate, for which warm-up followed by decay arises naturally from the proof rather than being imposed heuristically. Building on this theory, we develop a practical learning rate scheduler that relies only on standard hyperparameters and adapts the warm-up duration automatically at the beginning of training. We evaluate this method on large language model pretraining with LLaMA architectures and show that our adaptive warm-up selection consistently outperforms or at least matches the best manually tuned warm-up schedules across all considered setups, without additional hyperparameter search. Our source code is available at https://github.com/brain-lab-research/llm-baselines/tree/warmup
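The shape of the schedule described in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy model, not the paper's actual scheduler: it assumes local curvature grows linearly with the suboptimality gap, `L(x) ≈ l0 + l1 * (f(x) - f*)`, so a stable step size proportional to `1 / L(x)` is small while the gap is large (warm-up) and, combined with a standard `1/sqrt(t)` factor, decays once the loss plateaus. The constants `l0`, `l1` and the simulated loss trajectory are illustrative assumptions.

```python
import math

def adaptive_lr(loss, f_star, step, l0=1.0, l1=10.0):
    """Toy step size: 1 / (assumed curvature) times a 1/sqrt(t) decay.

    Hypothetical curvature model: l0 + l1 * (loss - f_star), i.e.
    curvature shrinks with the suboptimality gap, as in the paper's
    generalized smoothness assumption (constants here are made up).
    """
    gap = max(loss - f_star, 0.0)
    curvature = l0 + l1 * gap
    return 1.0 / (curvature * math.sqrt(step + 1))

# Simulated loss trajectory: the step size rises while the gap shrinks
# (warm-up emerges automatically), then the sqrt-decay term dominates.
losses = [5.0, 2.0, 1.0, 0.5, 0.2, 0.1, 0.05, 0.02, 0.01, 0.01, 0.01, 0.01]
lrs = [adaptive_lr(loss, f_star=0.0, step=t) for t, loss in enumerate(losses)]
```

Note that neither a warm-up length nor a peak learning rate is specified anywhere: both fall out of the assumed curvature model, which is the qualitative point of the paper's schedule.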
Problem

Research questions and friction points this paper is trying to address.

warm-up
adaptive scheduling
norm-constrained optimizers
learning rate
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive learning rate scheduling
norm-constrained optimizers
warm-up
generalized smoothness
automatic warm-up duration
Artem Riabinin
PhD, KAUST
Optimization
Andrey Veprikov
Unknown affiliation
Optimization, ML, DL
Arman Bolatov
Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)
Martin Takáč
Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)
optimization, machine learning, deep neural network, big data, computer science
A. Beznosikov
Basic Research of Artificial Intelligence Laboratory (BRAIn Lab), Federated Learning Problems Laboratory, Innopolis University