🤖 AI Summary
To address the substantial memory overhead of Mixture-of-Experts (MoE) models, this paper proposes an efficient compression method based on merging expert outputs. Unlike the conventional parameter-aggregation view of expert merging, our approach models expert fusion in the output space: low-rank reconstruction matrices, constructed via mathematical optimization, are inserted into the forward pass to compress expert outputs. Framing merging as an explicit optimization problem gives the method a principled, interpretable foundation. The method is architecture-agnostic, compatible with diverse MoE forward structures, and imposes no constraints on expert count or routing mechanism. Experiments on multiple MoE models (e.g., Switch Transformers, GLaM) show that, at identical compression ratios, our method consistently outperforms baselines such as expert pruning, distillation, and parameter quantization in both accuracy and inference efficiency, while preserving the model's functionality and scalability.
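To make the output-space view concrete, here is a minimal sketch of a compressed expert group, under our own assumptions: a standard two-layer FFN expert and one square reconstruction matrix per original expert. The class name `MergedExpertGroup` and all shapes are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class MergedExpertGroup(nn.Module):
    """Illustrative sketch: several original experts are replaced by one
    shared (merged) expert; each original expert i keeps only a small
    reconstruction matrix C_i applied to the shared expert's *output*."""

    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        # One shared FFN replaces `num_experts` original expert FFNs.
        self.shared = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.ReLU(),
            nn.Linear(d_ff, d_model),
        )
        # The extra matrices inserted into the forward pass, one per
        # original expert (identity init = "no correction"). Shown as full
        # d_model x d_model matrices for simplicity; a low-rank
        # factorization C_i = A_i @ B_i would shrink them further.
        self.recon = nn.Parameter(torch.eye(d_model).repeat(num_experts, 1, 1))

    def forward(self, x: torch.Tensor, expert_idx: int) -> torch.Tensor:
        # Approximate expert i's output as C_i @ E_shared(x).
        return self.shared(x) @ self.recon[expert_idx].T
```

Routing is unchanged in this sketch: the router still selects expert i, but the token flows through the shared expert plus C_i, so the group's memory footprint drops from `num_experts` full FFNs to one FFN plus `num_experts` small matrices.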
📝 Abstract
The Mixture-of-Experts (MoE) technique has proven to be a promising way to scale model size efficiently and has been widely adopted in recent LLM development. However, the substantial memory overhead of MoE models makes their compression an important research direction. In this work, we provide a theoretical analysis of expert merging, a recently proposed technique for compressing MoE models. Rather than interpreting expert merging through the conventional lens of parameter aggregation, we approach it from the perspective of merging the experts' outputs. Our key insight is that the merging process can be interpreted as inserting additional matrices into the forward computation, which naturally leads to an optimization formulation. Building on this analysis, we introduce MergeMoE, a method that leverages mathematical optimization to construct the compression matrices. We evaluate MergeMoE on multiple MoE models and show that our algorithm consistently outperforms the baselines at the same compression ratios.
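The abstract does not spell out the optimization objective, but one plausible closed-form construction is a calibration-based least-squares fit in the Frobenius norm. The sketch below is an assumption on our part, not the paper's algorithm; the function name `fit_compression_matrix` is hypothetical.

```python
import numpy as np

def fit_compression_matrix(y_merged: np.ndarray,
                           y_expert: np.ndarray) -> np.ndarray:
    """Fit C minimizing ||Y_merged @ C.T - Y_expert||_F.

    y_merged: (n_tokens, d_model) outputs of the merged expert on a
              calibration set.
    y_expert: (n_tokens, d_model) outputs of one original expert on the
              same tokens.
    Returns C of shape (d_model, d_model) such that C @ y_merged[t]
    approximates y_expert[t] for each calibration token t.
    """
    # Ordinary least squares; lstsq solves min ||A x - b|| per column of b.
    C_t, _, _, _ = np.linalg.lstsq(y_merged, y_expert, rcond=None)
    return C_t.T
```

Inserting each fitted C into the forward pass then recovers an approximation of the corresponding expert's output from the single merged expert, which is exactly the "additional matrices in the forward computation" interpretation described above.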