AI Summary
Sparse Mixture-of-Experts (SMoE) models scale effectively but suffer from poor robustness to data distribution shifts (e.g., contamination) and suboptimal routing efficiency. To address this, we propose SymphonySMoE, a novel SMoE framework that, for the first time, incorporates a social graph structure into expert interaction modeling. It introduces a lightweight, modular, graph-augmented routing mechanism that leverages graph neural networks to explicitly capture inter-expert dependencies. Crucially, SymphonySMoE requires no modifications to the underlying expert architectures and is plug-and-play compatible with XMoE and general-purpose language/vision models. Extensive experiments on language modeling and visual instruction tuning demonstrate consistent and significant gains over strong baselines. We validate its efficiency and scalability at the 4.2B and 7.4B parameter scales, achieving improved robustness against data contamination and higher token routing accuracy.
Abstract
Sparse Mixture of Experts (SMoE) has emerged as a promising approach to achieving unparalleled scalability in deep learning by decoupling model parameter count from computational cost. By activating only a small subset of parameters per sample, SMoE enables significant growth in model capacity while maintaining efficiency. However, SMoE struggles to adapt to distributional shifts, leading to reduced robustness under data contamination. In this work, we introduce SymphonySMoE, a novel family of SMoE that incorporates a social graph to model interactions among experts. This graph-based structure enhances the token routing process, addressing the robustness challenges inherent in conventional SMoE designs. SymphonySMoE is lightweight, modular, and integrates seamlessly with existing SMoE-based models such as XMoE and the Generalist Language Model. We provide both theoretical analysis and empirical evidence demonstrating SymphonySMoE's advantages over baseline SMoE. Extensive experiments on language modeling and visual instruction tuning validate our method's effectiveness. We further highlight the scalability of SymphonySMoE to models with 4.2 and 7.4 billion parameters, showcasing its applicability in fine-tuning tasks for large-scale systems.
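The abstract does not spell out how the social graph enters the router. The snippet below is a minimal PyTorch sketch, not the authors' implementation, of what graph-augmented top-k routing could look like under one plausible reading: learned expert embeddings exchange one round of messages over a learned expert-to-expert ("social") graph before token-to-expert affinities and sparse gates are computed. All names (`GraphAugmentedRouter`, `adj_logits`, `msg`) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAugmentedRouter(nn.Module):
    """Hypothetical sketch of a social-graph-augmented top-k SMoE router."""

    def __init__(self, d_model: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # One learnable embedding per expert (rows of the routing matrix).
        self.expert_emb = nn.Parameter(torch.randn(num_experts, d_model) * 0.02)
        # Learnable logits for the expert-expert "social" adjacency matrix.
        self.adj_logits = nn.Parameter(torch.zeros(num_experts, num_experts))
        # One linear message-passing step over the expert graph.
        self.msg = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor):
        # x: (num_tokens, d_model)
        adj = torch.softmax(self.adj_logits, dim=-1)        # row-normalized expert graph
        neighbors = adj @ self.expert_emb                    # aggregate neighbor embeddings
        experts = self.expert_emb + self.msg(neighbors)      # graph-refined expert embeddings
        logits = x @ experts.t()                             # token-to-expert affinities
        top_vals, top_idx = logits.topk(self.top_k, dim=-1)  # sparse top-k routing
        gates = F.softmax(top_vals, dim=-1)                  # normalized gate weights
        return gates, top_idx


# Usage: route 8 tokens of width 16 across 4 experts, activating 2 per token.
router = GraphAugmentedRouter(d_model=16, num_experts=4, top_k=2)
gates, expert_ids = router(torch.randn(8, 16))
print(gates.shape, expert_ids.shape)  # torch.Size([8, 2]) torch.Size([8, 2])
```

Because the graph refinement touches only the router's expert embeddings, a module of this shape would leave the experts themselves unchanged, which is consistent with the plug-and-play claim above; the actual SymphonySMoE routing rule may differ.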