$μ$-Parametrization for Mixture of Experts

📅 2025-08-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of theoretical guarantees for feature learning in the router and expert modules of Mixture-of-Experts (MoE) models as model width varies. It introduces μ-Parameterization (μP) into MoE architectures for the first time, deriving a MoE-specific μP scheme grounded in μTransfer theory that ensures consistent feature-learning behavior across widths for both the routing and expert layers. The work further analyzes, and empirically validates, how the number of experts and expert granularity affect the optimal learning rate, yielding a learning-rate adaptation strategy for scaled-up MoE models. Experiments on MoE models at multiple scales indicate improved training stability and convergence efficiency.

📝 Abstract
Recent years have seen growing interest in and adoption of LLMs, with $μ$Transfer becoming a key technique for tuning hyperparameters in large-scale training. Meanwhile, Mixture-of-Experts (MoE) has emerged as a leading architecture in extremely large models. However, the intersection of these two advancements has remained unexplored. In this work, we derive a $μ$-Parameterization ($μ$P) for MoE, providing theoretical guarantees for feature learning across model widths in both the router and experts. We empirically validate our parameterization and further investigate how scaling the number of experts and granularity affects the optimal learning rate.
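The abstract's width-consistency claim rests on the standard μTransfer recipe: under Adam, the learning rate of matrix-like (hidden) parameters is scaled down by the width multiplier, while vector-like parameters keep the base rate. As a rough illustration only — the paper's MoE-specific rules for the router and for expert/granularity scaling are its contribution and are not reproduced here — a hypothetical helper applying that generic rule uniformly to router and expert weights might look like this:

```python
def mup_lr_groups(base_lr, base_width, width, n_experts):
    """Per-parameter-group learning rates under a generic muP-style rule.

    Hypothetical sketch assuming Adam: hidden (matrix-like) weights get
    base_lr divided by the width multiplier (width / base_width); vector-like
    parameters (biases, layer-norm gains) keep base_lr. The paper's actual
    MoE scheme additionally adjusts for expert count and granularity.
    """
    mult = width / base_width
    hidden_lr = base_lr / mult  # router and expert weight matrices
    return {
        "router_weight": hidden_lr,          # routing layer
        "expert_weights": [hidden_lr] * n_experts,  # one entry per expert
        "vector_params": base_lr,            # biases, norms: unscaled
    }
```

For example, widening a base-width-256 model to 1024 (multiplier 4) would cut the router and expert learning rates to a quarter of the base rate, which is the behavior μTransfer relies on to make a learning rate tuned at small width carry over to large width.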
Problem

Research questions and friction points this paper is trying to address.

Develop μ-Parameterization for Mixture of Experts
Ensure consistent feature learning across model widths
Study how expert count and granularity scaling affect the optimal learning rate
Innovation

Methods, ideas, or system contributions that make the work stand out.

μ-Parameterization for Mixture of Experts
Theoretical guarantees for feature learning
Empirical validation of optimal learning rates