How Many Experts Are Enough? Towards Optimal Semantic Specialization for Mixture-of-Experts

📅 2025-12-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address insufficient semantic specialization among experts and blind scale configuration in sparse Mixture-of-Experts (MoE) models, this paper proposes MASS, an Adaptive Semantic Specialization framework. Methodologically, MASS introduces: (1) a gradient-driven semantic drift detection mechanism to dynamically identify expert functional overlap; (2) a token-level routing confidence–aware strategy for dynamic sparse routing and on-demand expert expansion; and (3) a differentiable expert expansion mechanism enabling end-to-end optimization of the expert pool size during training. On synthetic benchmarks, MASS precisely converges to the cost–performance Pareto frontier. In multilingual and vision multi-task settings, it significantly outperforms mainstream MoE baselines—achieving up to a 37% improvement in expert semantic differentiation—while maintaining cross-domain robustness and computational efficiency.

Technology Category

Application Category

📝 Abstract
Finding the optimal configuration of Sparse Mixture-ofExperts (SMoE) that maximizes semantic differentiation among experts is essential for exploiting the full potential of MoE architectures. However, existing SMoE frameworks either heavily rely on hyperparameter tuning or overlook the importance of diversifying semantic roles across experts when adapting the expert pool size. We propose Mixture-of-Experts for Adaptive Semantic Specialization (MASS), a semanticaware MoE framework for adaptive expert expansion and dynamic routing. MASS introduces two key advancements: (i) a gradient-based semantic drift detector that prompts targeted expert expansion when the existing expert pool lacks capacity to capture the full semantic diversity of the data, and (ii) an integration of adaptive routing strategy that dynamically adjusts expert usage based on token-level routing confidence mass. We first demonstrate that MASS reliably converges to the point of optimal balance between cost-performance trade-off with notably improved sematic specialization in a highly controlled synthetic setup. Further empirical results on real-world datasets across language and vision domains show that MASS consistently outperforms a range of strong MoE baselines, demonstrating its domain robustness and enhanced expert specialization.
Problem

Research questions and friction points this paper is trying to address.

Optimizes semantic specialization in Mixture-of-Experts architectures
Adapts expert pool size to capture full semantic data diversity
Dynamically routes tokens based on routing confidence for efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-based semantic drift detector for expert expansion
Adaptive routing strategy adjusting expert usage dynamically
Semantic-aware MoE framework for adaptive expert specialization
🔎 Similar Papers
No similar papers found.
S
Sumin Park
Korea Advanced Institute of Science and Technology (KAIST)
Noseong Park
Noseong Park
Tenured Associate Professor, KAIST
Artificial Intelligence