Maximum Score Routing For Mixture-of-Experts

📅 2025-08-18
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In sparsely activated Mixture-of-Experts (MoE) models, conventional top-k routing enforces fixed expert capacity constraints, leading to token dropping or inefficient padding and thus degrading hardware utilization; removing such constraints, however, causes severe load imbalance and reduced computational efficiency. To address this, we propose MaxScore routingβ€”a novel mechanism that formulates token assignment as a minimum-cost maximum-flow problem. By integrating a differentiable SoftTopk operator with graph-based flow optimization, MaxScore achieves dynamic load balancing without explicit capacity limits, avoiding the limitations of iterative rerouting and optimal transport while preserving both differentiability and global optimality. Experiments demonstrate that, at identical FLOPs, models trained with MaxScore achieve lower training loss and higher downstream task performance, significantly improving computational efficiency and overall model effectiveness.
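The paper does not spell out the SoftTopk operator here, but a common way to make a top-k mask differentiable is to replace the hard indicator with a sigmoid around a threshold chosen so the soft mask sums to k. The sketch below (an illustrative assumption, not the paper's definition) finds that threshold by bisection:

```python
import numpy as np

def soft_topk(scores, k, temperature=0.1, iters=50):
    """Illustrative differentiable relaxation of a top-k mask.

    The paper's SoftTopk operator may be defined differently; here we
    bisect for a threshold tau so that sigmoid((scores - tau) / T)
    sums to approximately k. As temperature -> 0 the mask approaches
    the hard top-k indicator.
    """
    lo, hi = scores.min() - 1.0, scores.max() + 1.0
    mask = np.zeros_like(scores)
    for _ in range(iters):
        tau = (lo + hi) / 2.0
        mask = 1.0 / (1.0 + np.exp(-(scores - tau) / temperature))
        if mask.sum() > k:   # mask too permissive: raise the threshold
            lo = tau
        else:                # mask too strict: lower the threshold
            hi = tau
    return mask

scores = np.array([2.0, 1.0, 0.5, -1.0])
mask = soft_topk(scores, k=2)  # nearly 1 on the two largest scores
```

Because every operation is smooth in `scores`, gradients flow to all experts' logits, unlike a hard `argsort`-based top-k.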

πŸ“ Abstract
Routing networks in sparsely activated mixture-of-experts (MoE) dynamically allocate input tokens to top-k experts through differentiable sparse transformations, enabling scalable model capacity while preserving computational efficiency. Traditional MoE networks impose an expert capacity constraint to ensure GPU-friendly computation. However, this leads to token dropping when capacity is saturated and results in low hardware efficiency due to padding in underutilized experts. Removing the capacity constraint, in turn, compromises load balancing and computational efficiency. To address these issues, we propose Maximum Score Routing (MaxScore), a novel MoE routing paradigm that models routing as a minimum-cost maximum-flow problem and integrates a SoftTopk operator. MaxScore resolves the fundamental limitations of iterative rerouting and optimal transport formulations, achieving lower training losses and higher evaluation scores at equivalent FLOPs compared to both constrained and unconstrained baselines. Implementation details and experimental configurations can be obtained from https://github.com/dongbw18/MaxScore.git.
Problem

Research questions and friction points this paper is trying to address.

Addresses token dropping in MoE due to expert capacity constraints
Improves hardware efficiency by reducing padding in underutilized experts
Balances load and computation without iterative rerouting limitations
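The friction points above can be seen in a toy simulation of conventional capacity-constrained top-1 routing (a minimal sketch with hypothetical sizes, not the paper's setup): every token a full expert cannot accept is dropped, and every unfilled slot on another expert becomes padding.

```python
import numpy as np

rng = np.random.default_rng(0)
num_tokens, num_experts, capacity = 8, 4, 2   # 4 experts x 2 slots = 8 slots

scores = rng.random((num_tokens, num_experts))  # stand-in router scores
top1 = scores.argmax(axis=1)                    # each token's preferred expert

assigned, dropped = [], []
slots_used = np.zeros(num_experts, dtype=int)
for tok, exp in enumerate(top1):
    if slots_used[exp] < capacity:
        slots_used[exp] += 1
        assigned.append((tok, int(exp)))
    else:
        dropped.append(tok)   # preferred expert is full: token is dropped

# Slots left idle on underutilized experts must be padded for dense compute
padded = capacity * num_experts - len(assigned)
```

Because total slots equal total tokens here, every dropped token implies exactly one padded slot, so both failure modes appear together whenever the routing distribution is skewed.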
Innovation

Methods, ideas, or system contributions that make the work stand out.

Models routing as minimum-cost maximum-flow problem
Integrates SoftTopk operator for efficient token allocation
Achieves better performance without capacity constraints
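To make the flow formulation concrete, here is a minimal sketch (all graph layout and numbers are illustrative assumptions, not the paper's exact construction): tokens and experts become graph nodes, edge costs are negated routing scores, and a generic min-cost max-flow solver (successive shortest paths) finds a globally optimal assignment in which no token is dropped.

```python
def mcmf(n, edges, s, t):
    """Min-cost max-flow via successive shortest paths (Bellman-Ford).

    edges: list of [u, v, capacity, cost]. Returns (max_flow, min_cost).
    """
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])     # forward edge
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])  # residual edge
    total_flow, total_cost = 0, 0
    while True:
        dist, prev = [float("inf")] * n, [None] * n
        dist[s] = 0
        updated = True
        while updated:  # Bellman-Ford (handles negative residual costs)
            updated = False
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v], prev[v], updated = dist[u] + cost, (u, i), True
        if dist[t] == float("inf"):
            return total_flow, total_cost
        bottleneck, v = float("inf"), t
        while v != s:                       # find bottleneck capacity
            u, i = prev[v]
            bottleneck, v = min(bottleneck, graph[u][i][1]), u
        v = t
        while v != s:                       # push flow along the path
            u, i = prev[v]
            graph[u][i][1] -= bottleneck
            graph[v][graph[u][i][3]][1] += bottleneck
            v = u
        total_flow += bottleneck
        total_cost += bottleneck * dist[t]

# Toy instance (hypothetical numbers): 4 tokens, 2 experts, top-1 routing.
scores = [[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.2, 0.8]]
T, E = 4, 2
S, SINK = 0, 1 + T + E        # node layout: source, tokens, experts, sink
edges = []
for tok in range(T):
    edges.append([S, 1 + tok, 1, 0])       # each token routes one unit
    for e in range(E):
        # negate (integerized) scores so min cost maximizes total score
        edges.append([1 + tok, 1 + T + e, 1, -round(1000 * scores[tok][e])])
for e in range(E):
    edges.append([1 + T + e, SINK, 2, 0])  # balanced cap: 2 tokens per expert

flow, cost = mcmf(2 + T + E, edges, S, SINK)
# flow == 4: no token dropped; -cost/1000 is the best total routing score
```

Even though three tokens prefer expert 0, the flow solver globally trades off scores and balance: two of them keep expert 0 and the third is rerouted, rather than being dropped as a greedy capacity-constrained router would do.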
Bowen Dong
Tsinghua University
Yilong Fan
Tianjin University
Yutao Sun
Tsinghua University
Natural Language Processing, Machine Learning
Zhenyu Li
Tsinghua University
Tengyu Pan
Tsinghua University
Xun Zhou
Professor of Computer Science, Harbin Institute of Technology, Shenzhen (HIT-SZ)
Big data analytics, Spatial database, Spatial Data Mining, GIS, machine learning
Jianyong Wang
Tsinghua University