Spectral Condition for $μ$P under Width-Depth Scaling

📅 2026-02-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenges of unstable feature learning and difficult hyperparameter transfer in generative foundation models under joint width–depth scaling. The authors propose a unified spectral framework that establishes, for the first time, a spectral maximal update parameterization ($μ$P) condition characterizing how weights and their per-step updates should scale with model width and depth, naturally subsuming existing $μ$P formulations as special cases. By integrating spectral analysis with optimizer dynamics in a residual network architecture, they derive hyperparameter scaling laws applicable to a broad class of optimizers. Experiments on GPT-2-style language models demonstrate that the proposed condition preserves stable feature learning and supports robust hyperparameter transfer across model sizes.
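
For orientation, the following is a minimal LaTeX sketch of the established width-only spectral $μ$P condition (Yang et al.) that the summary says this paper generalizes to joint width–depth scaling. The symbols $W_\ell$, $\Delta W_\ell$, and the layer widths $n_{\ell-1}$, $n_\ell$ follow that prior formulation; the exact depth-dependent factor introduced by this paper for residual branches is its contribution and is not reproduced here.

```latex
% Width-only spectral \mu P condition (prior work), for a layer
% W_\ell : \mathbb{R}^{n_{\ell-1}} \to \mathbb{R}^{n_\ell} with per-step update \Delta W_\ell:
\[
  \|W_\ell\|_{2} \;=\; \Theta\!\left(\sqrt{\tfrac{n_\ell}{n_{\ell-1}}}\right),
  \qquad
  \|\Delta W_\ell\|_{2} \;=\; \Theta\!\left(\sqrt{\tfrac{n_\ell}{n_{\ell-1}}}\right).
\]
% The paper's condition reportedly attaches an additional depth-dependent
% factor (a function of the number of residual blocks) to these targets
% for weights on residual branches.
```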

📝 Abstract
Generative foundation models are increasingly scaled in both width and depth, posing significant challenges for stable feature learning and reliable hyperparameter (HP) transfer across model sizes. While maximal update parameterization ($μ$P) has provided a principled solution to both problems for width scaling, existing extensions to the joint width-depth scaling regime remain fragmented, architecture- and optimizer-specific, and often rely on technically involved theories. In this work, we develop a simple and unified spectral framework for $μ$P under joint width-depth scaling. Considering residual networks of varying block depths, we first introduce a spectral $μ$P condition that precisely characterizes how the norms of weights and their per-step updates should scale with width and depth, unifying previously disparate $μ$P formulations as special cases. Building on this condition, we then derive a general recipe for implementing $μ$P across a broad class of optimizers by mapping the spectral constraints to concrete HP parameterizations. This approach not only recovers existing $μ$P formulations (e.g., for SGD and AdamW) but also naturally extends to a wider range of optimizers. Finally, experiments on GPT-2 style language models demonstrate that the proposed spectral $μ$P condition preserves stable feature learning and enables robust HP transfer under width-depth scaling.
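
To make the "recipe" in the abstract concrete, here is a minimal Python sketch of how a spectral-norm target might be mapped to a per-layer AdamW learning rate, assuming Adam-style updates have entries of size roughly equal to the learning rate. The function name, the `base_lr` and `depth_exponent` parameters, and the specific depth factor are illustrative assumptions, not the paper's exact parameterization.

```python
import math

def spectral_mup_lr(base_lr, fan_in, fan_out, depth=1, depth_exponent=0.5):
    """Per-layer AdamW learning rate derived from a spectral-norm target.

    Heuristic sketch (not the paper's exact recipe): Adam-style updates have
    entries of size ~lr, so the spectral norm of a dense update is roughly
    lr * sqrt(fan_in * fan_out). Setting that equal to the width-only spectral
    target sqrt(fan_out / fan_in), optionally damped by a depth factor for
    residual-branch weights, gives lr ~ base_lr / (fan_in * depth**depth_exponent).
    """
    spectral_target = math.sqrt(fan_out / fan_in) / depth ** depth_exponent
    update_norm_per_unit_lr = math.sqrt(fan_in * fan_out)  # approx. spectral norm of an entrywise O(1) update
    return base_lr * spectral_target / update_norm_per_unit_lr
```

With `base_lr = 1.0` and `depth = 1`, a hidden layer of width 1024 receives a learning rate of about 1/1024, recovering the familiar 1/width Adam rule from width-only $μ$P; the depth factor is where a joint width–depth condition would depart from that rule.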
Problem

Research questions and friction points this paper is trying to address.

width-depth scaling
feature learning
hyperparameter transfer
μP
generative foundation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

spectral condition
maximal update parameterization
width-depth scaling
hyperparameter transfer
residual networks