🤖 AI Summary
This work addresses the limitations of existing Mixture-of-Experts (MoE) models, whose Top-k+Softmax routing mechanism is non-differentiable and couples expert selection with contribution assignment, thereby constraining both performance and scalability. To overcome this, we propose DirMoE, the first framework that decouples these two processes within a Dirichlet variational autoencoder formulation: expert selection is modeled via Bernoulli distributions, while contribution allocation is governed by Dirichlet distributions. End-to-end differentiability is achieved through Gumbel-Sigmoid relaxation and implicit reparameterization. By integrating an evidence lower bound (ELBO) objective with a hyperparameter scheduling strategy, DirMoE precisely controls the number of activated experts, significantly enhancing expert specialization while matching or surpassing the performance of current methods.
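The summary's claim that DirMoE "precisely controls the number of activated experts" follows from a simple property of independent Bernoulli gates: the expected number of active experts is just the sum of the per-expert activation probabilities, which can be penalized directly. The sketch below illustrates this idea in numpy; the function name, the squared-error form of the penalty, and the specific logits are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def expected_sparsity_penalty(select_logits, k_target):
    # Under independent Bernoulli gates, the expected number of active
    # experts is the sum of their activation probabilities. Penalizing
    # its distance to k_target controls sparsity *in expectation*.
    # (Illustrative squared-error form; the paper's penalty may differ.)
    expected_k = sigmoid(select_logits).sum()
    return (expected_k - k_target) ** 2

# Two experts strongly on, two strongly off -> expected count is ~2,
# so the penalty for k_target=2 is near zero.
logits = np.array([3.0, 3.0, -3.0, -3.0])
print(expected_sparsity_penalty(logits, k_target=2))
```

Because the penalty is a smooth function of the selection logits, it can be minimized by gradient descent alongside the task loss, unlike a hard Top-$k$ constraint.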
📝 Abstract
Mixture-of-Experts (MoE) models have demonstrated exceptional performance in large-scale language models. Existing routers typically rely on non-differentiable Top-$k$+Softmax routing, limiting their performance and scalability. We argue that two distinct decisions, which experts to activate and how to distribute contributions among them, are conflated in standard Top-$k$+Softmax. We introduce Dirichlet-Routed MoE (DirMoE), a novel end-to-end differentiable routing mechanism built on a Dirichlet variational autoencoder framework. This design disentangles the two core routing problems: expert selection, modeled by a Bernoulli component, and contribution allocation among the chosen experts, handled by a Dirichlet component. The entire forward pass remains fully differentiable through Gumbel-Sigmoid relaxation for expert selection and implicit reparameterization for the Dirichlet distribution. Our training objective, a variational ELBO, includes a direct sparsity penalty that precisely controls the expected number of active experts, alongside a schedule for key hyperparameters that guides the model from an exploratory to a definitive routing state. Empirically, the DirMoE router matches or exceeds other routing methods while improving expert specialization.
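The decoupled forward pass described above can be sketched in a few lines: a Gumbel-Sigmoid relaxation produces a soft Bernoulli mask over experts, a Dirichlet sample distributes contributions, and the two are combined into one gate vector. This is a minimal numpy illustration of the two-component structure only; the function names, the renormalization step, and the specific logits/concentrations are assumptions of this sketch, not the paper's implementation (which is differentiable end-to-end via implicit reparameterization).

```python
import numpy as np

def gumbel_sigmoid(logits, tau=1.0, rng=None):
    # Relaxed Bernoulli sample: add Logistic(0, 1) noise to the logits,
    # then squash with a temperature-scaled sigmoid. Low tau -> near-binary.
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(1e-6, 1 - 1e-6, size=logits.shape)
    noise = np.log(u) - np.log(1.0 - u)
    return 1.0 / (1.0 + np.exp(-(logits + noise) / tau))

def dirmoe_route(select_logits, alpha, tau=0.5, rng=None):
    """Sketch of decoupled routing: a relaxed Bernoulli mask decides
    *which* experts fire; Dirichlet weights decide *how much* each
    selected expert contributes."""
    rng = rng or np.random.default_rng(0)
    mask = gumbel_sigmoid(select_logits, tau, rng)  # soft selection in (0, 1)
    weights = rng.dirichlet(alpha)                  # point on the simplex
    gate = mask * weights
    return gate / gate.sum()                        # renormalize over selection

gate = dirmoe_route(np.array([2.0, -2.0, 0.5, -1.0]),
                    alpha=np.ones(4))
print(gate)  # nonnegative weights summing to 1
```

In contrast, Top-$k$+Softmax would compute both decisions from a single score vector; here the selection logits and the Dirichlet concentrations can be trained and regularized independently.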