🤖 AI Summary
To address insufficient semantic specialization among experts and untuned expert-pool sizing in sparse Mixture-of-Experts (MoE) models, this paper proposes MASS, an Adaptive Semantic Specialization framework. Methodologically, MASS introduces: (1) a gradient-driven semantic drift detection mechanism to dynamically identify functional overlap among experts; (2) a token-level routing strategy that uses routing-confidence mass to drive dynamic sparse routing and on-demand expert expansion; and (3) a differentiable expert expansion mechanism enabling end-to-end optimization of the expert pool size during training. On synthetic benchmarks, MASS converges precisely to the cost–performance Pareto frontier. In multilingual and vision multi-task settings, it significantly outperforms mainstream MoE baselines—achieving up to a 37% improvement in expert semantic differentiation—while maintaining cross-domain robustness and computational efficiency.
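Mechanism (2) above routes each token to the smallest expert set whose cumulative routing probability covers a confidence threshold, so confident tokens use fewer experts. The sketch below illustrates this idea under stated assumptions; the function name, threshold, and cap are illustrative choices, not the paper's exact algorithm.

```python
import math

def dynamic_route(logits, mass_threshold=0.9, max_experts=4):
    """Pick the smallest expert set whose cumulative routing
    probability ("confidence mass") reaches mass_threshold.

    Illustrative sketch of confidence-mass routing, not the
    paper's exact procedure. logits: router logits for one token.
    Returns (chosen expert indices, renormalized weights).
    """
    # Numerically stable softmax over the router logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Greedily add experts in order of probability until the
    # confidence-mass threshold (or the expert cap) is hit.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    chosen, mass = [], 0.0
    for i in order:
        chosen.append(i)
        mass += probs[i]
        if mass >= mass_threshold or len(chosen) == max_experts:
            break
    # Renormalize the selected experts' weights to sum to 1.
    norm = sum(probs[i] for i in chosen)
    weights = [probs[i] / norm for i in chosen]
    return chosen, weights
```

A confidently routed token (one dominant logit) activates a single expert, while a near-uniform router distribution spreads the token across the capped maximum, which is how on-demand expansion pressure could be measured.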
📝 Abstract
Finding the optimal configuration of Sparse Mixture-of-Experts (SMoE) that maximizes semantic differentiation among experts is essential for exploiting the full potential of MoE architectures. However, existing SMoE frameworks either rely heavily on hyperparameter tuning or overlook the importance of diversifying semantic roles across experts when adapting the expert pool size. We propose Mixture-of-Experts for Adaptive Semantic Specialization (MASS), a semantic-aware MoE framework for adaptive expert expansion and dynamic routing. MASS introduces two key advancements: (i) a gradient-based semantic drift detector that triggers targeted expert expansion when the existing expert pool lacks the capacity to capture the full semantic diversity of the data, and (ii) an adaptive routing strategy that dynamically adjusts expert usage based on token-level routing confidence mass. We first demonstrate that MASS reliably converges to the optimal cost–performance trade-off with notably improved semantic specialization in a highly controlled synthetic setup. Further empirical results on real-world datasets across language and vision domains show that MASS consistently outperforms a range of strong MoE baselines, demonstrating its domain robustness and enhanced expert specialization.
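The gradient-based drift detector in (i) flags experts whose roles have collapsed onto one another. One simple proxy for such overlap, sketched below under assumptions, is the cosine alignment of experts' recent gradient directions; the similarity threshold and pairwise statistic are illustrative, since the paper's exact detector is not reproduced here.

```python
import math

def cosine(u, v):
    # Cosine similarity between two gradient vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def overlapping_pairs(expert_grads, sim_threshold=0.8):
    """Flag expert pairs whose gradient directions are highly
    aligned -- a proxy for redundant semantic roles that could
    trigger targeted expansion or re-specialization.

    Illustrative sketch: the detector described in the abstract is
    gradient-based, but this pairwise cosine test is an assumption.
    """
    pairs = []
    n = len(expert_grads)
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(expert_grads[i], expert_grads[j]) > sim_threshold:
                pairs.append((i, j))
    return pairs
```

Two experts pulled in nearly the same direction by the loss (high cosine similarity) are learning overlapping functions, whereas orthogonal gradients suggest distinct semantic roles.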