AI Summary
This work addresses the lack of effective methods to evaluate whether experts in sparse Mixture-of-Experts (MoE) models achieve non-redundant specialization, particularly in small-scale settings. To this end, the authors construct the first benchmark that accurately reflects large-scale routing behavior at a smaller scale, integrating multi-domain distinguishable data and an ideal reference router grounded in domain definitions. They systematically compare multiple routing strategies and introduce quantitative metrics to assess expert specialization. Their experiments reveal that a "balanced routing regime" is crucial for achieving both high expert utilization and meaningful specialization, a finding they further validate on models up to 35 times larger, demonstrating strong scalability of their conclusions.
Abstract
Sparse Mixture-of-Experts (MoE) architectures are increasingly popular for frontier large language models (LLMs), but they introduce training challenges due to routing complexity. Fully leveraging the parameters of an MoE model requires all experts to be well-trained and to specialize in non-redundant ways. Assessing this, however, is complicated by the lack of established metrics; moreover, many routing techniques exhibit similar performance at smaller scales, which is often not reflective of their behavior at large scale. To address this challenge, we propose the MoE Routing Testbed, a setup that gives clearer visibility into routing dynamics at small scale while using realistic data. The testbed pairs a data mix with clearly distinguishable domains with a reference router that prescribes ideal routing based on these domains, providing a well-defined upper bound for comparison. This enables quantifiable measurement of expert specialization. To demonstrate the value of the testbed, we compare various MoE routing approaches and show that balancing scope is the crucial factor that allows specialization while maintaining high expert utilization. We confirm that this observation generalizes to models 35x larger.
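The core idea of comparing a learned router against a domain-grounded reference can be illustrated with a small sketch. This is not the paper's actual metric or implementation; it is a minimal, hypothetical example assuming we have per-token expert assignments and domain labels, and that the reference router simply sends each domain to the expert most often chosen for it:

```python
import numpy as np

def reference_agreement(expert_ids, domain_ids):
    """Hypothetical specialization proxy: fraction of tokens whose
    assigned expert matches a reference router that maps each domain
    to that domain's majority expert (a rough alignment upper bound).
    """
    expert_ids = np.asarray(expert_ids)
    domain_ids = np.asarray(domain_ids)
    majority = {}
    for d in np.unique(domain_ids):
        # Most frequently chosen expert for tokens of this domain.
        counts = np.bincount(expert_ids[domain_ids == d])
        majority[d] = int(counts.argmax())
    # Reference assignment: every token goes to its domain's majority expert.
    reference = np.array([majority[d] for d in domain_ids])
    return float((expert_ids == reference).mean())

# Perfectly specialized routing: each domain has its own expert.
print(reference_agreement([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0
# One token of domain 0 routed "off-domain" lowers the score.
print(reference_agreement([0, 1, 1, 1], [0, 0, 1, 1]))  # 0.75
```

A score of 1.0 means routing is perfectly predictable from domain membership; values near `1 / num_experts` indicate domain-agnostic (unspecialized) routing. The paper's testbed defines its reference router from the domain definitions themselves rather than from observed majorities.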