MossNet: Mixture of State-Space Experts is a Multi-Head Attention

📅 2025-10-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing state space models (SSMs) typically emulate only a single attention head, limiting their representational capacity. This paper proposes MossNet, described as the first SSM architecture to integrate a Mixture-of-Experts (MoE) mechanism into the time-mixing SSM kernel, so that a mixture of state-space experts realizes an equivalent linear multi-head attention and overcomes the single-head modeling bottleneck. Its core design applies MoE both to the channel-mixing MLP blocks and to the time-mixing SSM kernels, preserving the efficiency of recurrent sequence modeling while substantially enhancing representation learning. Experiments show that MossNet outperforms mainstream Transformer and SSM baselines at comparable parameter counts and data budgets. Moreover, its large-scale variant, trained on over one trillion tokens, exhibits strong scalability, achieving faster inference and lower GPU memory consumption than competitive models.
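The mechanism described above can be illustrated with a minimal sketch: each "expert" is a linear-attention recurrence (an SSM kernel with its own decay), and a router mixes the top-k experts so that the mixture plays the role of multiple attention heads. This is an illustrative toy, not the paper's implementation; all names (`ssm_expert_scan`, `mixture_of_ssm_experts`), the sequence-level routing, and the per-expert scalar decay are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssm_expert_scan(q, k, v, decay):
    """One linear-attention 'head' as an SSM recurrence (assumed form):
    S_t = decay * S_{t-1} + k_t v_t^T,   y_t = q_t @ S_t."""
    T, d = q.shape
    S = np.zeros((d, d))
    out = np.zeros((T, d))
    for t in range(T):
        S = decay * S + np.outer(k[t], v[t])
        out[t] = q[t] @ S
    return out

def mixture_of_ssm_experts(x, Wq, Wk, Wv, Wr, decays, top_k=2):
    """Route to the top-k SSM experts; the softmax-weighted sum of expert
    outputs emulates linear multi-head attention (hypothetical routing)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    logits = x.mean(axis=0) @ Wr              # per-sequence router logits
    w = np.exp(logits - logits.max())
    w /= w.sum()                              # softmax over experts
    top = np.argsort(w)[-top_k:]              # indices of chosen experts
    y = np.zeros_like(x)
    for e in top:
        y += w[e] * ssm_expert_scan(q, k, v, decays[e])
    return y / w[top].sum()                   # renormalize over chosen experts

T, d, n_experts = 6, 4, 4
x = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
Wr = rng.standard_normal((d, n_experts))
decays = np.linspace(0.5, 0.99, n_experts)    # each expert has its own decay
y = mixture_of_ssm_experts(x, Wq, Wk, Wv, Wr, decays)
print(y.shape)  # (6, 4)
```

Because each expert is a constant-size recurrence rather than a quadratic attention map, the mixture keeps the O(T) sequence cost that motivates SSM architectures.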

📝 Abstract
Large language models (LLMs) have significantly advanced generative applications in natural language processing (NLP). Recent trends in model architectures revolve around efficient variants of transformers or state-space/gated-recurrent models (SSMs, GRMs). However, prevailing SSM/GRM-based methods often emulate only a single attention head, potentially limiting their expressiveness. In this work, we propose MossNet, a novel mixture-of-state-space-experts architecture that emulates a linear multi-head attention (MHA). MossNet leverages a mixture-of-experts (MoE) implementation not only in channel-mixing multi-layer perceptron (MLP) blocks but also in the time-mixing SSM kernels to realize multiple "attention heads." Extensive experiments on language modeling and downstream evaluations show that MossNet outperforms both transformer- and SSM-based architectures of similar model size and data budgets. Larger variants of MossNet, trained on trillions of tokens, further confirm its scalability and superior performance. In addition, real-device profiling on a Samsung Galaxy S24 Ultra and an Nvidia A100 GPU demonstrates favorable runtime speed and resource usage compared to similarly sized baselines. Our results suggest that MossNet is a compelling new direction for efficient, high-performing recurrent LLM architectures.
Problem

Research questions and friction points this paper is trying to address.

Emulating multi-head attention with state-space experts
Overcoming single-head limitation in SSM/GRM architectures
Improving efficiency and performance of recurrent LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of state-space experts emulates multi-head attention
MoE implementation in both MLP blocks and SSM kernels
Outperforms transformer and SSM architectures with similar resources
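The second innovation bullet notes MoE in the MLP blocks as well as the SSM kernels. A minimal sketch of the channel-mixing side, assuming standard token-level top-k routing over small ReLU MLP experts (the function name `moe_mlp` and all shapes are illustrative, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(1)

def moe_mlp(x, experts, Wr, top_k=2):
    """Token-level MoE channel mixer: each token is routed to its top-k
    expert MLPs and the softmax-gated outputs are summed."""
    T, d = x.shape
    out = np.zeros_like(x)
    for t in range(T):
        logits = x[t] @ Wr                    # router logits for this token
        w = np.exp(logits - logits.max())
        w /= w.sum()                          # softmax over experts
        top = np.argsort(w)[-top_k:]          # chosen expert indices
        for e in top:
            W1, W2 = experts[e]
            h = np.maximum(x[t] @ W1, 0.0)    # ReLU hidden layer
            out[t] += (w[e] / w[top].sum()) * (h @ W2)
    return out

T, d, hidden, n_experts = 5, 4, 8, 4
x = rng.standard_normal((T, d))
experts = [(rng.standard_normal((d, hidden)) * 0.1,
            rng.standard_normal((hidden, d)) * 0.1)
           for _ in range(n_experts)]
Wr = rng.standard_normal((d, n_experts))
y = moe_mlp(x, experts, Wr)
print(y.shape)  # (5, 4)
```

Only the selected experts run per token, so parameter count grows with the number of experts while per-token compute stays roughly constant.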