Expert Threshold Routing for Autoregressive Language Modeling with Dynamic Computation Allocation and Load Balancing

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of traditional Token-choice Mixture-of-Experts (TC-MoE) in autoregressive language modeling, particularly its rigid computation allocation and reliance on auxiliary losses to maintain load balancing. The authors propose an Expert Threshold (ET) routing mechanism, wherein each expert maintains an exponential moving average (EMA) threshold derived from the global token distribution, and tokens are independently routed based on whether their routing scores exceed this threshold. This approach achieves fully causal routing without auxiliary losses for the first time, naturally preserving load balance while enabling dynamic computation allocation. Evaluated on a 2.4B-parameter model pretrained on FineWeb-Edu, ET routing reduces cross-entropy loss by 0.067 compared to TC-MoE, equivalent to achieving the same performance with 1.6× fewer training tokens.

📝 Abstract
Token-choice Mixture-of-Experts (TC-MoE) routes each token to a fixed number of experts, limiting dynamic computation allocation and requiring auxiliary losses to maintain load balance. We propose Expert Threshold (ET) routing, where each expert maintains an exponential moving average (EMA) threshold estimated from the global token distribution. At both training and inference, each token is independently routed to an expert if its score exceeds the expert's threshold, enabling dynamic computation allocation while achieving load balance without auxiliary losses. This fully causal mechanism eliminates dependence on other tokens in the batch, making it well-suited for autoregressive language modeling. In pretraining experiments scaling to 2.4B parameters on FineWeb-Edu, ET achieves 0.067 lower cross-entropy loss than TC-MoE, equivalent to reaching the same performance with 1.6× fewer tokens.
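The routing rule described in the abstract can be sketched as follows. This is an illustrative NumPy sketch, not the paper's implementation: the quantile-based threshold update, the `target_load` parameter, and the EMA decay value are all assumptions about one plausible way to estimate per-expert thresholds from the token score distribution.

```python
import numpy as np

def et_route(scores, thresholds, ema_decay=0.99, target_load=0.125):
    """One Expert Threshold routing step (hypothetical sketch).

    scores: (num_tokens, num_experts) router affinity scores for a batch.
    thresholds: (num_experts,) per-expert EMA thresholds.
    target_load: assumed fraction of tokens each expert should accept,
                 e.g. 1/num_experts for uniform load.
    Returns a boolean dispatch mask and the updated thresholds.
    """
    # Each token is routed independently: it is sent to every expert whose
    # threshold its score exceeds, so per-token compute varies dynamically
    # and no cross-token comparison (top-k over the batch) is needed.
    mask = scores > thresholds  # (num_tokens, num_experts)

    # Smoothly track the score quantile that would admit `target_load`
    # of tokens per expert; the EMA keeps thresholds stable across batches.
    batch_quantile = np.quantile(scores, 1.0 - target_load, axis=0)
    new_thresholds = ema_decay * thresholds + (1.0 - ema_decay) * batch_quantile
    return mask, new_thresholds
```

Because the dispatch decision for a token depends only on its own score and thresholds computed from past statistics, the mechanism stays causal at inference time, consistent with the abstract's claim.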
Problem

Research questions and friction points this paper is trying to address.

Mixture-of-Experts
autoregressive language modeling
dynamic computation allocation
load balancing
token routing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expert Threshold Routing
Dynamic Computation Allocation
Load Balancing
Mixture-of-Experts
Autoregressive Language Modeling
Hanchi Sun
Computer Science and Engineering, Lehigh University, Bethlehem, PA, USA
Yixin Liu
Computer Science and Engineering, Lehigh University, Bethlehem, PA, USA
Yonghui Wu
Associate Professor, University of Florida
Natural Language Processing, Machine Learning, Medical Informatics, Pharmacovigilance
Lichao Sun
Computer Science and Engineering, Lehigh University, Bethlehem, PA, USA