🤖 AI Summary
This work addresses a critical gap in the existing literature by providing the first comprehensive survey of sparse Mixture-of-Experts (MoE) models, systematically integrating their algorithmic foundations, decentralized architectures, and applications in vertical domains. It examines core mechanisms such as routing strategies and expert network design, and extends the discussion to decentralized deployment paradigms and adaptation methods for cross-modal and domain-specific scenarios. By synthesizing recent advances across these dimensions, this survey fills a notable void in the current review literature and offers an authoritative reference for researchers and practitioners aiming to develop efficient, scalable large models grounded in sparse MoE principles.
📝 Abstract
The sparse Mixture of Experts (MoE) architecture has emerged as a powerful approach for scaling deep learning models to more parameters at comparable computational cost. As an important branch of large language models (LLMs), MoE models activate only a subset of experts, selected by a routing network, for each input. This sparse conditional computation mechanism significantly improves computational efficiency, paving a promising path toward greater scalability and cost-efficiency. It not only enhances downstream applications such as natural language processing, computer vision, and multimodal learning across horizontal domains, but also exhibits broad applicability in vertical domains including medical diagnosis, autonomous driving, financial analysis, and business intelligence. Despite the growing popularity and application of MoE models across these domains, a systematic exploration of recent MoE advances in many important fields is still missing. Existing surveys on MoE suffer from limitations such as narrow coverage or insufficient exploration of key areas. This survey seeks to fill these gaps. In this paper, we first examine the foundational principles of MoE, with an in-depth exploration of its core components: the routing network and the expert network. Subsequently, we extend beyond the centralized paradigm to the decentralized paradigm, which unlocks the immense untapped potential of decentralized infrastructure, democratizes MoE development for broader communities, and delivers greater scalability and cost-efficiency. Furthermore, we explore vertical domain applications of MoE. Finally, we identify key challenges and promising future research directions. To the best of our knowledge, this survey is currently the most comprehensive review in the field of MoE. We aim for this article to serve as a valuable resource for both researchers and practitioners, enabling them to navigate and stay up-to-date with the latest advancements.
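The sparse conditional computation the abstract describes can be illustrated with a minimal top-k gating sketch: a routing network scores all experts for a token, but only the k highest-scoring experts are activated and their outputs combined. This is a generic illustration under simplifying assumptions (a linear gate, NumPy instead of a deep learning framework; the function name `topk_routing` is hypothetical), not the implementation of any specific MoE model surveyed.

```python
import numpy as np

def topk_routing(x, gate_w, k=2):
    """Score all experts with a linear gate, keep only the top-k.

    Returns the indices of the k selected experts and their
    softmax-normalized combination weights.
    """
    logits = x @ gate_w                          # (num_experts,) gating scores
    topk = np.argsort(logits)[-k:]               # indices of the k best experts
    w = np.exp(logits[topk] - logits[topk].max())
    w /= w.sum()                                 # softmax over selected experts only
    return topk, w

# Toy example: one 16-dim token routed among 8 experts.
rng = np.random.default_rng(0)
x = rng.standard_normal(16)                      # token hidden state
gate_w = rng.standard_normal((16, 8))            # routing network weights
experts, weights = topk_routing(x, gate_w, k=2)
print(experts, weights)
```

Because only k of the experts run per token, the forward-pass FLOPs stay roughly constant as the total expert count (and thus parameter count) grows, which is the scalability argument the abstract makes.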