Extending $\mu$P: Spectral Conditions for Feature Learning Across Optimizers

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem that the optimal hyperparameters of existing optimizers are highly sensitive to model scale, which prevents transfer across scales and makes tuning prohibitively expensive for large-scale training. The authors propose a framework grounded in spectral conditioning that extends μP (maximal update parameterization) to a broad class of mainstream optimizers (including AdamW, ADOPT, LAMB, Sophia, Shampoo, and Muon), sidestepping the tensor-program machinery used in earlier derivations. Combining spectral-condition analysis with μP parameterization enables zero-shot learning-rate transfer across model widths. Experiments on multiple benchmark architectures validate the approach and provide theoretical and empirical grounding for hyperparameter transfer in deep scaling regimes.
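For context, the spectral condition the summary refers to (introduced in prior work that this paper builds on, stated here roughly from that literature rather than from this paper's own derivations) requires each weight matrix and each optimizer update to have spectral norm scaling with the layer's fan-in $n_{\ell-1}$ and fan-out $n_\ell$ as:

\[
\|W_\ell\|_{*} = \Theta\!\left(\sqrt{\frac{n_\ell}{n_{\ell-1}}}\right),
\qquad
\|\Delta W_\ell\|_{*} = \Theta\!\left(\sqrt{\frac{n_\ell}{n_{\ell-1}}}\right),
\]

where $\|\cdot\|_{*}$ denotes the spectral norm. Holding these scalings fixed as width grows is what makes feature learning, and hence the optimal learning rate, stable across model sizes.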

📝 Abstract
Several variations of adaptive first-order and second-order optimization methods have been proposed to accelerate and scale the training of large language models. The performance of these optimization routines is highly sensitive to the choice of hyperparameters (HPs), which are computationally expensive to tune for large-scale models. Maximal update parameterization ($\mu$P) is a set of scaling rules which aims to make the optimal HPs independent of the model size, thereby allowing the HPs tuned on a smaller (computationally cheaper) model to be transferred to train a larger, target model. Despite promising results for SGD and Adam, deriving $\mu$P for other optimizers is challenging because the underlying tensor programming approach is difficult to grasp. Building on recent work that introduced spectral conditions as an alternative to tensor programs, we propose a novel framework to derive $\mu$P for a broader class of optimizers, including AdamW, ADOPT, LAMB, Sophia, Shampoo, and Muon. We implement our $\mu$P derivations on multiple benchmark models and demonstrate zero-shot learning rate transfer across increasing model width for the above optimizers. Further, we provide empirical insights into depth-scaling parameterization for these optimizers.
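As a rough illustration of what zero-shot learning-rate transfer means in practice, the sketch below applies the standard $\mu$P width-scaling rules for Adam-like optimizers (LR constant in width for input-like parameters, LR shrinking like 1/width for hidden and output matrices). This is a minimal sketch of the general $\mu$P recipe, not code from the paper; the function name and layer taxonomy are illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): muP-style per-layer
# learning-rate scaling for an Adam-like optimizer, so an LR tuned at a small
# "base" width can be reused zero-shot at a larger target width.

def mup_adam_lr(base_lr: float, base_width: int, width: int, layer: str) -> float:
    """Rescale a base learning rate from base_width to width.

    Standard muP rules for Adam-like updates (roughly):
      - "input" parameters (embeddings, biases): LR constant in width,
      - "hidden" (width x width) and "output" matrices: LR ~ 1/width.
    """
    if layer == "input":
        return base_lr
    elif layer in ("hidden", "output"):
        return base_lr * base_width / width
    raise ValueError(f"unknown layer type: {layer}")

# Example: LR tuned at width 256, transferred to a 4x wider model.
print(mup_adam_lr(1e-3, 256, 1024, "hidden"))  # prints 0.00025
print(mup_adam_lr(1e-3, 256, 1024, "input"))   # prints 0.001
```

In this scheme only the base LR is tuned (on the cheap, narrow model); every wider model derives its per-layer LRs mechanically from the width ratio, which is the "zero-shot transfer" claimed in the abstract.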
Problem

Research questions and friction points this paper is trying to address.

maximal update parameterization
optimizer
hyperparameter transfer
spectral conditions
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

spectral conditions
μP
optimizer transfer
zero-shot learning rate transfer
large language model scaling