DualSparse-MoE: Coordinating Tensor/Neuron-Level Sparsity with Expert Partition and Reconstruction

📅 2025-08-25
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
MoE architectures enable efficient scaling of large language models but suffer from high computational overhead and unpredictable activation patterns, hindering inference efficiency. To address this, we propose DualSparse-MoE, a post-training inference optimization method that requires no retraining. It is the first approach to jointly introduce tensor-level dynamic pruning and neuron-level static sparse reconstruction during post-training, while incorporating a load-balancing-aware expert parallelism strategy to ensure mathematically consistent dual sparsity. Evaluated on three mainstream MoE models, DualSparse-MoE reduces computational cost by 25% on average, with only a marginal accuracy drop of 0.08%–0.28%. It achieves up to a 1.41× module-level inference speedup, significantly improving deployment efficiency and the accuracy–cost trade-off.

๐Ÿ“ Abstract
Mixture of Experts (MoE) has become a mainstream architecture for building Large Language Models (LLMs) by reducing per-token computation while enabling model scaling. It can be viewed as partitioning a large Feed-Forward Network (FFN) at the tensor level into fine-grained sub-FFNs, or experts, and activating only a sparse subset for each input. While this sparsity improves efficiency, MoE still faces substantial challenges due to its massive computational scale and unpredictable activation patterns. To enable efficient MoE deployment, we identify dual sparsity at the tensor and neuron levels in pre-trained MoE modules as a key factor for both accuracy and efficiency. Unlike prior work that increases tensor-level sparsity through finer-grained expert design during pre-training, we introduce post-training expert partitioning to induce such sparsity without retraining. This preserves the mathematical consistency of model transformations and enhances both efficiency and accuracy in subsequent fine-tuning and inference. Building upon this, we propose DualSparse-MoE, an inference system that integrates dynamic tensor-level computation dropping with static neuron-level reconstruction to deliver significant efficiency gains with minimal accuracy loss. Experimental results show that enforcing an approximate 25% drop rate with our approach reduces average accuracy by only 0.08%-0.28% across three prevailing MoE models, while nearly all degrees of computation dropping consistently yield proportional computational speedups. Furthermore, incorporating load-imbalance awareness into expert parallelism achieves a 1.41x MoE module speedup with just 0.5% average accuracy degradation.
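The abstract's central observation, that an FFN can be partitioned at the neuron level into sub-experts without changing its output, can be illustrated with a minimal sketch. This assumes a standard two-matrix ReLU FFN; the shapes and variable names are illustrative, not taken from the paper. Because ReLU acts elementwise per hidden neuron, splitting the hidden dimension into disjoint groups yields sub-FFNs whose summed outputs exactly reproduce the dense FFN, which is the mathematical consistency that makes post-training partitioning possible without retraining:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden, n_parts = 8, 32, 4

# Dense FFN: y = W2 @ relu(W1 @ x)
W1 = rng.standard_normal((d_hidden, d_model))
W2 = rng.standard_normal((d_model, d_hidden))
x = rng.standard_normal(d_model)

relu = lambda z: np.maximum(z, 0.0)
y_dense = W2 @ relu(W1 @ x)

# Partition the hidden neurons into disjoint groups, giving
# n_parts sub-experts; each uses a slice of W1's rows and the
# matching slice of W2's columns.
groups = np.array_split(np.arange(d_hidden), n_parts)
y_parts = sum(W2[:, g] @ relu(W1[g, :] @ x) for g in groups)

# The sub-expert outputs sum to the dense output exactly.
assert np.allclose(y_dense, y_parts)
```

Any gating that then activates only some of these sub-experts introduces the tensor-level sparsity the paper exploits, while skipping a group entirely is where approximation (and the reported accuracy cost) enters.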
Problem

Research questions and friction points this paper is trying to address.

Enhancing MoE efficiency via tensor and neuron sparsity coordination
Reducing computation without retraining through expert partitioning
Achieving speedups with minimal accuracy loss in inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-training expert partitioning without retraining
Dynamic tensor-level computation dropping
Static neuron-level reconstruction for efficiency
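The dynamic tensor-level dropping listed above can be sketched as follows. This is a toy illustration under an assumed rule, dropping routed experts whose normalized gate weight falls below a threshold and renormalizing the rest; the paper's actual dropping criterion is not specified in this summary, so the function and its parameters are hypothetical:

```python
import numpy as np

def drop_low_weight_experts(gate_logits, top_k=4, drop_thresh=0.1):
    """Toy tensor-level dropping: route a token to its top_k experts,
    then skip any whose softmax weight is below drop_thresh and
    renormalize the survivors. (Illustrative rule, not the paper's.)"""
    top = np.argsort(gate_logits)[-top_k:]           # top_k expert ids
    w = np.exp(gate_logits[top] - gate_logits[top].max())
    w /= w.sum()                                     # softmax over top_k
    keep = w >= drop_thresh                          # drop weak experts
    return top[keep], w[keep] / w[keep].sum()

# One token's gate logits over 6 experts: one of the top-4 experts
# carries little weight and gets dropped, saving its computation.
experts, weights = drop_low_weight_experts(
    np.array([2.0, 0.1, 1.5, -1.0, 0.05, 1.4]))
```

In this example three of the four routed experts survive, so roughly a quarter of the expert computation for the token is skipped, mirroring the ~25% drop rate the paper targets.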