🤖 AI Summary
This study investigates whether identifiable domain-specialized experts exist within Mixture-of-Experts (MoE) architectures in large language models and addresses the lack of interpretability in their specialization mechanisms. By analyzing expert activation patterns across ten state-of-the-art MoE models, spanning 3.8B to 120B parameters, on multi-domain tasks, the work provides the first empirical evidence of domain-specific experts. Building on this insight, it introduces Domain Steering MoE (DSMoE), a training-free, inference-time intervention that employs a domain-guided routing strategy to steer existing MoE models at zero additional computational cost. Experiments show that DSMoE consistently outperforms strong baselines such as supervised fine-tuning across four open-source MoE models, improving performance on both target and non-target domains while incurring no extra inference overhead.
📝 Abstract
In the era of Large Language Models (LLMs), the Mixture of Experts (MoE) architecture has emerged as an effective approach for training extremely large models with improved computational efficiency. This success builds upon extensive prior research aimed at enhancing expert specialization in MoE-based LLMs. However, the nature of such specializations and how they can be systematically interpreted remain open research challenges. In this work, we investigate this gap by posing a fundamental question: *Do domain-specific experts exist in MoE-based LLMs?* To answer this question, we evaluate ten advanced MoE-based LLMs ranging from 3.8B to 120B parameters and provide empirical evidence for the existence of domain-specific experts. Building on this finding, we propose **Domain Steering Mixture of Experts (DSMoE)**, a training-free framework that introduces zero additional inference cost and outperforms both well-trained MoE-based LLMs and strong baselines, including Supervised Fine-Tuning (SFT). Experiments on four advanced open-source MoE-based LLMs across both target and non-target domains demonstrate that our method achieves strong performance and robust generalization without increasing inference cost or requiring additional retraining. Our implementation is publicly available at https://github.com/giangdip2410/Domain-specific-Experts.
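To make the idea of a domain-guided routing intervention concrete, here is a minimal sketch of what steering an MoE router toward previously identified domain-specific experts could look like. This is an illustrative assumption, not the paper's actual formulation: the function name, the additive bias `alpha`, and the `domain_experts` set are all hypothetical, and a real implementation would operate on per-layer router logits inside the model's forward pass.

```python
import math

def domain_steered_topk(router_logits, domain_experts, alpha=1.0, k=2):
    """Hypothetical sketch of domain-guided routing.

    Adds a fixed bias `alpha` to the router logits of experts flagged
    as domain-specific, then performs standard top-k expert selection
    with softmax-normalized gating weights over the selected experts.
    """
    # Bias the logits of the domain-specific experts (assumed offset).
    logits = [x + (alpha if i in domain_experts else 0.0)
              for i, x in enumerate(router_logits)]
    # Standard top-k routing over the (now biased) logits.
    topk = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Softmax over the selected experts' logits to get gating weights.
    exps = [math.exp(logits[i]) for i in topk]
    total = sum(exps)
    weights = [e / total for e in exps]
    return topk, weights

# Example: 8 experts, with experts {1, 5} (hypothetically) identified
# as specialized for the current input's domain.
idx, w = domain_steered_topk(
    [0.2, 0.4, 0.1, 0.0, 0.3, 0.35, 0.05, 0.1],
    domain_experts={1, 5}, alpha=1.0, k=2)
```

Because the intervention only adds a constant to a handful of logits before the existing top-k selection, it changes neither the number of activated experts nor the FLOPs of the forward pass, which is consistent with the paper's "zero additional inference cost" claim.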