DirMoE: Dirichlet-routed Mixture of Experts

📅 2026-02-09
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses the limitations of existing Mixture-of-Experts (MoE) models, whose Top-k+Softmax routing mechanism is non-differentiable and couples expert selection with contribution assignment, thereby constraining both performance and scalability. To overcome this, we propose DirMoE, the first framework that decouples these two processes within a Dirichlet variational autoencoder formulation: expert selection is modeled via Bernoulli distributions, while contribution allocation is governed by Dirichlet distributions. End-to-end differentiability is achieved through Gumbel-Sigmoid relaxation and implicit reparameterization. By integrating an evidence lower bound (ELBO) objective with a hyperparameter scheduling strategy, DirMoE precisely controls the number of activated experts, significantly enhancing expert specialization while matching or surpassing the performance of current methods.
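The summary mentions Gumbel-Sigmoid relaxation as the device that keeps the Bernoulli expert-selection step differentiable. A minimal sketch of that relaxation (the function name, default temperature, and straight-through-style hard mode are my illustrative choices, not the paper's):

```python
import math
import random

def gumbel_sigmoid(logit, tau=1.0, hard=False, rng=random):
    """Relaxed Bernoulli sample (Binary Concrete / Gumbel-Sigmoid).

    The difference of two Gumbel samples is a Logistic sample, so we
    draw a Logistic noise term and push (logit + noise) / tau through
    a sigmoid to get a differentiable surrogate for a Bernoulli gate.
    """
    # Clamp u away from {0, 1} so the logs below stay finite.
    u = min(max(rng.random(), 1e-12), 1.0 - 1e-12)
    logistic_noise = math.log(u) - math.log(1.0 - u)
    y = 1.0 / (1.0 + math.exp(-(logit + logistic_noise) / tau))
    if hard:
        # Discretize at evaluation time (the relaxed value y would be
        # used for gradients in a straight-through setup).
        return 1.0 if y > 0.5 else 0.0
    return y
```

Lowering `tau` over training makes the gates increasingly binary, which matches the exploratory-to-definitive schedule the summary describes.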

📝 Abstract
Mixture-of-Experts (MoE) models have demonstrated exceptional performance in large-scale language models. Existing routers typically rely on non-differentiable Top-$k$+Softmax, limiting their performance and scalability. We argue that two distinct decisions, which experts to activate and how to distribute expert contributions among them, are conflated in standard Top-$k$+Softmax. We introduce Dirichlet-Routed MoE (DirMoE), a novel end-to-end differentiable routing mechanism built on a Dirichlet variational autoencoder framework. This design fundamentally disentangles the core routing problems: expert selection, modeled by a Bernoulli component, and expert contribution among chosen experts, handled by a Dirichlet component. The entire forward pass remains fully differentiable through the use of Gumbel-Sigmoid relaxation for the expert selection and implicit reparameterization for the Dirichlet distribution. Our training objective, a variational ELBO, includes a direct sparsity penalty that precisely controls the number of active experts in expectation, alongside a schedule for key hyperparameters that guides the model from an exploratory to a definitive routing state. Moreover, our DirMoE router matches or exceeds other methods while improving expert specialization.
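The abstract's decoupled routing (a Bernoulli gate per expert for selection, a Dirichlet sample for contribution allocation) can be sketched as below. This is a forward-pass-only illustration under my own simplifications: the gate uses a Gumbel-Sigmoid relaxation, the Dirichlet sample is built from normalized Gamma draws, and the gradient machinery (implicit reparameterization, the ELBO objective) is omitted entirely. Function and variable names are mine, not the paper's.

```python
import math
import random

def dirmoe_route(select_logits, alphas, tau=1.0, rng=random):
    """Sketch of decoupled routing: per-expert selection gates times
    Dirichlet contribution weights, renormalized over selected experts."""
    # 1) Expert selection: one relaxed Bernoulli gate per expert.
    gates = []
    for logit in select_logits:
        u = min(max(rng.random(), 1e-12), 1.0 - 1e-12)
        logistic_noise = math.log(u) - math.log(1.0 - u)
        gates.append(1.0 / (1.0 + math.exp(-(logit + logistic_noise) / tau)))
    # 2) Contribution allocation: a Dirichlet(alphas) sample, obtained
    # by normalizing independent Gamma(alpha_i, 1) draws.
    gammas = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(gammas)
    weights = [g / total for g in gammas]
    # 3) Combine: mask contributions by the gates, then renormalize so
    # the routing weights over (softly) selected experts sum to one.
    masked = [g * w for g, w in zip(gates, weights)]
    z = sum(masked) or 1.0
    return [m / z for m in masked]
```

In a real implementation the Dirichlet draw would use an implicitly reparameterized sampler (as the abstract states) so gradients flow to the concentration parameters; `random.gammavariate` here only reproduces the sampling path, not the gradients.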
Problem

Research questions and friction points this paper is trying to address.

Mixture-of-Experts
routing mechanism
differentiability
expert selection
expert contribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dirichlet-routed MoE
differentiable routing
expert selection
expert contribution
variational ELBO
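The ELBO's direct sparsity penalty controls the number of active experts "in expectation": with independent Bernoulli gates, the expected count is just the sum of the gate probabilities. A one-line sketch of such a penalty (the squared-error form is my assumption; the paper's exact objective may differ):

```python
def sparsity_penalty(select_probs, k_target):
    """Penalize deviation of the expected number of active experts
    (sum of independent Bernoulli gate probabilities) from k_target."""
    expected_k = sum(select_probs)
    return (expected_k - k_target) ** 2
```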
Amirhossein Vahidi
Wellcome Sanger Institute, Wellcome Genome Campus, Cambridge, UK
Hesam Asadollahzadeh
Wellcome Sanger Institute, Wellcome Genome Campus, Cambridge, UK; School of Computing and Information Systems (CIS), Faculty of Engineering and IT (FEIT), University of Melbourne, Australia
Navid Akhavan Attar
School of Computing and Information Systems (CIS), Faculty of Engineering and IT (FEIT), University of Melbourne, Australia
Marie Moullet
Wellcome Sanger Institute, Wellcome Genome Campus, Cambridge, UK
Kevin Ly
Wellcome Sanger Institute, Wellcome Genome Campus, Cambridge, UK
Xingyi Yang
Assistant Professor, The Hong Kong Polytechnic University
Machine Learning · Computer Vision · Artificial Intelligence · Generative AI
Mohammad Lotfollahi
Wellcome Sanger Institute, Wellcome Genome Campus, Cambridge, UK