Dense Backpropagation Improves Training for Sparse Mixture-of-Experts

πŸ“… 2025-04-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Sparse Mixture-of-Experts (MoE) models suffer from unstable training, slow convergence, and degraded pretraining performance because the router receives only sparse gradients during backpropagation. This work proposes Default MoE: while preserving forward-pass Top-K sparsity, it substitutes each non-activated expert's output with a default value, an exponential moving average (EMA) of that expert's outputs seen earlier in training, so the router receives dense gradient feedback over all experts in the backward pass. The method requires no architectural modifications, no additional routing computation, and no extra parameters. Experiments demonstrate that Default MoE significantly outperforms standard Top-K MoE across multiple downstream tasks, with greater training stability, faster convergence, and negligible computational overhead. Its core contribution is a "forward-sparse, backward-dense" approximation that densifies the router's gradient without densifying the forward pass.

πŸ“ Abstract
Mixture of Experts (MoE) pretraining is more scalable than dense Transformer pretraining, because MoEs learn to route inputs to a sparse set of their feedforward parameters. However, this means that MoEs only receive a sparse backward update, leading to training instability and suboptimal performance. We present a lightweight approximation method that gives the MoE router a dense gradient update while continuing to sparsely activate its parameters. Our method, which we refer to as Default MoE, substitutes missing expert activations with default outputs consisting of an exponential moving average of expert outputs previously seen over the course of training. This allows the router to receive signals from every expert for each token, leading to significant improvements in training performance. Our Default MoE outperforms standard TopK routing in a variety of settings without requiring significant computational overhead. Code: https://github.com/vatsal0/default-moe.
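To make the mechanism concrete, here is a minimal numpy sketch of the routing logic described in the abstract. It is an illustrative reconstruction, not the authors' implementation (see the linked repository for that): class and parameter names (`DefaultMoE`, `ema_decay`, the per-expert linear "experts") are hypothetical, and gradients are not shown, only the forward substitution and EMA update.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class DefaultMoE:
    """Illustrative sketch of Default MoE routing (assumed structure).

    Top-K experts actually run on each token; every non-selected expert
    contributes an EMA 'default' output instead, so the combined output
    (and hence the router, via the routing probabilities) sees a signal
    from all experts."""

    def __init__(self, n_experts, d_model, k=2, ema_decay=0.99):
        self.k = k
        self.decay = ema_decay
        self.router_w = rng.normal(0, 0.02, (d_model, n_experts))
        # Toy linear experts, for illustration only.
        self.experts = [rng.normal(0, 0.02, (d_model, d_model))
                        for _ in range(n_experts)]
        # EMA of each expert's mean output: its default fallback value.
        self.defaults = np.zeros((n_experts, d_model))

    def forward(self, x):                       # x: (tokens, d_model)
        probs = softmax(x @ self.router_w)      # (tokens, n_experts)
        topk = np.argsort(-probs, axis=-1)[:, :self.k]
        out = np.zeros_like(x)
        for e, W in enumerate(self.experts):
            selected = (topk == e).any(axis=-1)         # tokens routed to e
            expert_out = np.where(selected[:, None],
                                  x @ W,                # real expert output
                                  self.defaults[e])     # EMA default fallback
            out += probs[:, e:e + 1] * expert_out
            if selected.any():
                # Update the EMA default from the real outputs just computed.
                mean_out = (x[selected] @ W).mean(axis=0)
                self.defaults[e] = (self.decay * self.defaults[e]
                                    + (1 - self.decay) * mean_out)
        return out
```

In an autograd framework, `probs[:, e]` multiplies a (detached) default for non-selected experts, so the router weights receive a gradient for every expert while only K expert FFNs execute per token.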
Problem

Research questions and friction points this paper is trying to address.

Sparse backward updates make MoE pretraining unstable
Router receives gradient signal from only the Top-K experts
Performance must improve without added computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dense gradient update for sparse MoE router
Substitutes missing expert activations with EMA default outputs
Improves training without computational overhead
πŸ”Ž Similar Papers
No similar papers found.