🤖 AI Summary
This work addresses the challenge of efficiently integrating linear sequence modeling (LSM)—including linear attention, state space models, and linear RNNs—with sparse Mixture-of-Experts (MoE) in large language models. We propose Linear-MoE, the first framework enabling deep architectural fusion of LSM and MoE. To support scalable training, we design a sequence-parallel training paradigm tailored for Linear-MoE, enabling flexible hybrid modeling with Transformer-MoE. Our system uniformly adopts three-dimensional parallelism across model, data, and sequence dimensions. Evaluated on models ranging from 0.3B to 7B parameters, Linear-MoE achieves substantially lower training overhead and linear-time inference complexity, while matching the accuracy of Transformer-MoE on major benchmarks. These results demonstrate the effectiveness and scalability of jointly optimizing linearized sequence modeling with sparse expert activation.
📝 Abstract
Linear sequence modeling (LSM) techniques such as linear attention, state space models, and linear RNNs, together with Mixture-of-Experts (MoE), have recently emerged as significant architectural improvements. In this paper, we introduce Linear-MoE, a production-level system for modeling and training large-scale models that integrate LSM with MoE. Linear-MoE leverages the advantages of both LSM modules for linear-complexity sequence modeling and MoE layers for sparse activation, aiming to offer high performance with efficient training. The Linear-MoE system comprises: 1) a Modeling subsystem, which provides a unified framework supporting all instances of LSM, and 2) a Training subsystem, which facilitates efficient training by incorporating various advanced parallelism technologies, particularly Sequence Parallelism designed for Linear-MoE models. Additionally, we explore hybrid models that combine Linear-MoE layers with standard Transformer-MoE layers, together with the corresponding Sequence Parallelism, to further enhance model flexibility and performance. Evaluations on two model series, A0.3B-2B and A1B-7B, demonstrate that Linear-MoE achieves efficiency gains while maintaining competitive performance on various benchmarks, showcasing its potential as a next-generation foundational model architecture. Code: https://github.com/OpenSparseLLMs/Linear-MoE.
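The abstract describes a block structure that pairs a linear-complexity sequence-modeling module with a sparse MoE layer. A minimal NumPy sketch of that idea is shown below: a causal linear-attention sublayer in its O(N) recurrent form (a running key-value state instead of an N×N attention matrix) followed by a top-1 sparse MoE sublayer, each with a residual connection. All names, dimensions, the feature map, and the top-1 router are illustrative assumptions for exposition, not the framework's actual implementation.

```python
import numpy as np

def feature_map(x):
    """Positive feature map phi(x); a simple relu+1 choice (hypothetical)."""
    return np.maximum(x, 0.0) + 1.0

def linear_attention(q, k, v):
    """Causal linear attention computed in O(N) via a running state."""
    n, d = q.shape
    state = np.zeros((d, v.shape[1]))  # accumulated phi(k) v^T
    norm = np.zeros(d)                 # accumulated phi(k) for normalization
    out = np.zeros_like(v)
    for t in range(n):
        phi_k, phi_q = feature_map(k[t]), feature_map(q[t])
        state += np.outer(phi_k, v[t])
        norm += phi_k
        out[t] = (phi_q @ state) / (phi_q @ norm + 1e-6)
    return out

class MoELayer:
    """Top-1 sparse MoE: each token is routed to a single linear expert."""
    def __init__(self, d, n_experts, rng):
        self.w_gate = 0.02 * rng.standard_normal((d, n_experts))
        self.experts = [0.02 * rng.standard_normal((d, d)) for _ in range(n_experts)]

    def __call__(self, x):
        choice = (x @ self.w_gate).argmax(axis=-1)  # router picks one expert per token
        out = np.empty_like(x)
        for e, w in enumerate(self.experts):
            mask = choice == e
            if mask.any():
                out[mask] = x[mask] @ w  # only the chosen expert runs for these tokens
        return out

class LinearMoEBlock:
    """One block: LSM sublayer (linear attention) then an MoE sublayer,
    each wrapped in a residual connection."""
    def __init__(self, d, n_experts, rng):
        self.wq = 0.02 * rng.standard_normal((d, d))
        self.wk = 0.02 * rng.standard_normal((d, d))
        self.wv = 0.02 * rng.standard_normal((d, d))
        self.moe = MoELayer(d, n_experts, rng)

    def __call__(self, x):
        h = x + linear_attention(x @ self.wq, x @ self.wk, x @ self.wv)
        return h + self.moe(h)
```

Because the attention state has fixed size (d × d) regardless of sequence length, inference cost grows linearly in N, while the MoE layer keeps per-token compute proportional to one expert rather than all of them — the two efficiency properties the paper combines.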