🤖 AI Summary
Sparse Mixture-of-Experts (MoE) models suffer from unstable training, slow convergence, and degraded performance during pretraining because backpropagation only yields sparse gradients. This work proposes Default MoE: while preserving forward-pass Top-K sparsity, it substitutes an exponential moving average (EMA) of each expert's outputs as a default fallback value, so the router receives dense gradient feedback from all experts during backpropagation. The method requires no architectural modifications, incurs no additional routing computation, and introduces zero extra parameters. Experiments demonstrate that Default MoE significantly outperforms standard Top-K MoE across multiple downstream tasks, achieving greater training stability and faster convergence with negligible computational overhead. Its core innovation is being the first approach to realize a "forward-sparse, backward-dense" gradient-densification approximation: sparse expert activation in the forward pass paired with dense router gradients in the backward pass.
📄 Abstract
Mixture of Experts (MoE) pretraining is more scalable than dense Transformer pretraining, because MoEs learn to route inputs to a sparse set of their feedforward parameters. However, this means that MoEs only receive a sparse backward update, leading to training instability and suboptimal performance. We present a lightweight approximation method that gives the MoE router a dense gradient update while continuing to sparsely activate its parameters. Our method, which we refer to as Default MoE, substitutes missing expert activations with default outputs consisting of an exponential moving average of expert outputs previously seen over the course of training. This allows the router to receive signals from every expert for each token, leading to significant improvements in training performance. Our Default MoE outperforms standard TopK routing in a variety of settings without requiring significant computational overhead. Code: https://github.com/vatsal0/default-moe.
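To make the mechanism concrete, here is a minimal NumPy sketch of the idea described above: the forward pass activates only the Top-K experts, while non-selected experts contribute their EMA "default" output, so every expert's term is weighted by its router probability and the router would receive a dense gradient signal. All names (`DefaultMoESketch`, `ema_decay`, etc.) are our own illustrative choices, not the paper's actual implementation; see the linked repository for the real code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class DefaultMoESketch:
    """Hypothetical minimal sketch of Default MoE routing (single token, linear experts)."""

    def __init__(self, n_experts, d, k, ema_decay=0.99, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_experts, d, d)) / np.sqrt(d)   # expert weights
        self.router = rng.normal(size=(d, n_experts)) / np.sqrt(d) # router weights
        self.k = k
        self.decay = ema_decay
        # EMA of each expert's outputs, used as the "default" for inactive experts
        self.default = np.zeros((n_experts, d))

    def forward(self, x):
        # x: (d,) a single token embedding, for clarity
        probs = softmax(x @ self.router)           # router probabilities, (n_experts,)
        topk = set(np.argsort(probs)[-self.k:])    # indices of the activated experts
        out = np.zeros_like(x)
        for e in range(len(probs)):
            if e in topk:
                y = x @ self.W[e]                  # real (sparse) expert computation
                # refresh this expert's EMA default with its fresh output
                self.default[e] = self.decay * self.default[e] + (1 - self.decay) * y
            else:
                y = self.default[e]                # substitute the EMA default output
            # every expert contributes a probs[e]-weighted term, so in a framework
            # with autodiff the router logits would all receive gradient
            out += probs[e] * y
        return out
```

The key design point is the final accumulation: with standard Top-K routing the sum only ranges over the selected experts, so the router logits of unselected experts get no gradient; here the EMA default fills that gap at near-zero compute cost, since no extra expert forward passes are run.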