🤖 AI Summary
This work addresses the limitations of conventional sparse mixture-of-experts (MoE) models, which employ independent routing at each layer, resulting in an excessively large path space and poor statistical efficiency that hinder the learning of stable expert routing structures. To overcome this, the authors propose Path-Constrained Mixture of Experts (PathMoE), a novel architecture that shares router parameters across layers to dramatically reduce the effective path space, thereby enhancing path consistency and structural learnability. Notably, PathMoE naturally induces token clustering according to linguistic functionality without requiring auxiliary load-balancing losses. Experiments demonstrate that PathMoE achieves lower perplexity, superior downstream task performance, and greater robustness to routing perturbations compared to standard MoE baselines, consistently across both 0.9B and 16B parameter scales.
📝 Abstract
Sparse Mixture-of-Experts (MoE) architectures enable efficient scaling by activating only a subset of parameters for each input. However, conventional MoE routing selects each layer's experts independently, creating N^L possible expert paths for N experts across L layers. This far exceeds typical training set sizes, leading to statistical inefficiency: the model may not learn meaningful structure over such a vast path space. To constrain this path space, we propose \pathmoe{}, which shares router parameters across consecutive layers. Experiments on 0.9B and 16B parameter models demonstrate consistent improvements in perplexity and downstream tasks over independent routing, while eliminating the need for auxiliary load-balancing losses. Analysis reveals that tokens following the same path naturally cluster by linguistic function, with \pathmoe{} producing more concentrated groups, better cross-layer consistency, and greater robustness to routing perturbations. These results offer a new perspective on MoE architectures through the lens of expert paths.
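The core contrast in the abstract — L independent per-layer routers yielding N^L possible paths, versus a single shared router coupling the decisions — can be illustrated with a toy sketch. This is not the paper's implementation: the expert count, layer count, and top-1 routing below are hypothetical, and in a real model the hidden state changes between layers, so shared routing constrains rather than literally fixes the path.

```python
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, N_LAYERS, D = 4, 6, 8  # hypothetical sizes, not the paper's

# Independent routing: a separate router weight matrix per layer.
independent_routers = [rng.normal(size=(D, N_EXPERTS)) for _ in range(N_LAYERS)]

# Shared routing (PathMoE-style sketch): one router matrix reused by every layer.
shared_router = rng.normal(size=(D, N_EXPERTS))

def route(x, router):
    """Top-1 expert index from the router's logits for hidden state x."""
    return int(np.argmax(x @ router))

# A token's hidden state, held fixed across layers purely for illustration.
x = rng.normal(size=D)

path_independent = [route(x, W) for W in independent_routers]
path_shared = [route(x, shared_router) for _ in range(N_LAYERS)]

# With a fixed hidden state, the shared router repeats one expert choice,
# so only N distinct paths exist in this toy setting instead of N**L.
assert len(set(path_shared)) == 1
print("independent path:", path_independent)
print("shared path:     ", path_shared)
```

The point of the sketch is the coupling: with independent routers each layer's decision is a free choice among N experts, while a shared router ties all L decisions to the same parameters, shrinking the effective path space the model must organize.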