Hyperparameter Transfer Enables Consistent Gains of Matrix-Preconditioned Optimizers Across Scales

📅 2025-12-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates hyperparameter transferability of matrix-preconditioning optimizers (e.g., Shampoo, SOAP, Muon) across model scales (190M–1.4B parameters) in large language models. Addressing the limitation that μP-guided learning rate scaling remains susceptible to finite-width bias, the authors propose a joint correction combining block-wise parameter grouping and explicit spectral normalization. They further validate that scaling independent weight decay as 1/width is near-optimal in the compute-optimal regime. Experiments on Llama architectures show that Muon and Shampoo achieve 1.4× and 1.3× training speedups over AdamW, respectively—yet these gains degrade rapidly under incorrect scaling, underscoring the critical role of proper hyperparameter transfer in ensuring cross-scale stability. The results demonstrate that spectral-aware preconditioning and theoretically grounded regularization jointly enable robust, scalable optimization without performance collapse across model sizes.

📝 Abstract
Several recently introduced deep learning optimizers utilizing matrix-level preconditioning have shown promising speedups relative to the current dominant optimizer AdamW, particularly in relatively small-scale experiments. However, efforts to validate and replicate their successes have reported mixed results. To better understand the effectiveness of these optimizers at scale, in this work we investigate how to scale preconditioned optimizers via hyperparameter transfer, building on prior works such as $\mu$P. We study how the optimal learning rate and weight decay should scale with model width and depth for a wide range of optimizers, including Shampoo, SOAP, and Muon, accounting for the impact of commonly used techniques such as blocking and grafting. We find that scaling the learning rate according to $\mu$P improves transfer, but can still suffer from significant finite-width deviations that cause drifting optimal learning rates, which we show can be mitigated by blocking and explicit spectral normalization. For compute-optimal scaling, we find scaling independent weight decay as $1/\mathrm{width}$ is nearly optimal across optimizers. Applying these scaling rules, we show Muon and Shampoo consistently achieve $1.4\times$ and $1.3\times$ speedup over AdamW for training Llama-architecture language models of sizes ranging from $190$M to $1.4$B, whereas the speedup vanishes rapidly with scale under incorrect scaling. Based on these results and further ablations, we argue that studying optimal hyperparameter transfer is essential for reliably comparing optimizers at scale given a realistic tuning budget.
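The scaling rules described in the abstract can be sketched as a small helper. This is a minimal illustration, not the paper's implementation: the base width and base values are hypothetical, and it applies the $\mu$P-style $1/\mathrm{width}$ learning-rate rule for hidden (matrix) parameters together with the $1/\mathrm{width}$ independent weight decay rule the paper finds nearly optimal.

```python
# Hedged sketch of width-dependent hyperparameter transfer, assuming a
# muP-style setup where a base config is tuned at a small proxy width.
# Function name, base_width, and base values are illustrative assumptions.

def scaled_hyperparams(width, base_width=256, base_lr=1e-2, base_wd=1e-4):
    """Scale learning rate and independent weight decay with model width.

    - Learning rate for hidden (matrix) parameters scales as 1/width (muP).
    - Independent weight decay also scales as 1/width, the rule the paper
      reports as nearly optimal for compute-optimal training.
    """
    ratio = base_width / width
    return {
        "lr": base_lr * ratio,            # lr ∝ 1/width
        "weight_decay": base_wd * ratio,  # wd ∝ 1/width
    }

# Quadrupling the width divides both hyperparameters by four.
small = scaled_hyperparams(width=256)
large = scaled_hyperparams(width=1024)
```

Under these rules, a learning rate and weight decay tuned once at the proxy width transfer to larger models without retuning, which is what makes optimizer comparisons feasible under a realistic tuning budget.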
Problem

Research questions and friction points this paper is trying to address.

Scaling matrix-preconditioned optimizers effectively across model sizes.
Determining optimal learning rate and weight decay scaling rules.
Ensuring consistent optimizer performance gains over AdamW at scale.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hyperparameter transfer scaling for preconditioned optimizers
Learning rate and weight decay scaling rules
Blocking and spectral normalization mitigate finite-width deviations
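The "blocking and spectral normalization" idea above can be illustrated with a short sketch: split a weight-update matrix into square blocks and rescale each block by its spectral norm (largest singular value) so every block applies a unit-operator-norm update. The block size and exact normalization here are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def spectrally_normalize_blocks(update, block=2):
    """Rescale each (block x block) tile of `update` to unit spectral norm.

    Illustrative sketch: blocking bounds per-block scale, and explicit
    spectral normalization removes finite-width drift in the update's
    operator norm.
    """
    out = update.copy()
    n, m = update.shape
    for i in range(0, n, block):
        for j in range(0, m, block):
            blk = out[i:i + block, j:j + block]  # view into `out`
            sigma = np.linalg.norm(blk, ord=2)   # largest singular value
            if sigma > 0:
                blk /= sigma                     # unit spectral norm in place
    return out

# A diagonal update with singular values 4 and 2 is rescaled so its
# largest singular value becomes 1.
G = np.array([[4.0, 0.0], [0.0, 2.0]])
G_hat = spectrally_normalize_blocks(G, block=2)
```

Normalizing the update's spectral norm directly, rather than relying on the asymptotic $\mu$P prediction, is what keeps the optimal learning rate from drifting at finite width.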