Expert Merging in Sparse Mixture of Experts with Nash Bargaining

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Sparse Mixture-of-Experts (SMoE) models lack a principled, theoretically grounded weighting mechanism for expert fusion. Method: This paper proposes a game-theoretic expert-collaboration framework in which the Nash bargaining solution formally models cooperative and competitive interactions among experts, yielding an interpretable, dynamically adaptive weighting strategy. It further integrates a complex-valued momentum optimizer with guaranteed convergence, improving both the efficiency and stability of parameter fusion. Contribution/Results: The method is architecture-agnostic, compatible with mainstream MoE designs without modifying the base model structure. Experiments show consistent improvements over existing fusion approaches on language modeling, text classification, and robustness benchmarks, and the method scales to Qwen1.5-MoE (14B) and DeepSeek-MoE (16B), where it proves effective in both zero-shot and fine-tuning settings.

📝 Abstract
Existing expert merging strategies for Sparse Mixture of Experts (SMoE) typically rely on input-dependent or input-independent averaging of expert parameters, but often lack a principled weighting mechanism. In this work, we reinterpret expert merging through the lens of game theory, revealing cooperative and competitive dynamics among experts. Based on this perspective, we introduce Nash Merging of Experts (NAMEx), a novel framework that incorporates Nash Bargaining into the merging process, enabling more balanced and efficient collaboration among experts. Additionally, we incorporate complex momentum into NAMEx to accelerate expert propagation with theoretical guarantees for convergence. Extensive experiments across language modelling, text classification, image classification, and zero-shot robustness under data corruption show that NAMEx consistently outperforms competing methods while integrating seamlessly with popular MoE architectures. Finally, we demonstrate NAMEx's scalability by applying it to large-scale systems, including Qwen1.5-MoE (14B) and DeepSeek-MoE (16B), where it proves effective in both zero-shot and fine-tuning settings.
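The Nash bargaining view of merging described above can be illustrated with a toy sketch. This is a hypothetical construction, not the paper's NAMEx algorithm: the per-expert utility u_i(w) = ⟨Σ_j w_j θ_j, θ_i⟩ (how well the merged parameters serve expert i) and the projected-ascent loop over the simplex are assumptions chosen purely for illustration of maximizing the Nash product Π_i u_i(w).

```python
# Hypothetical sketch of Nash-bargaining-style expert merging (NOT the
# paper's exact NAMEx method). We pick simplex weights w maximizing the
# log of the Nash product, sum_i log u_i(w), where the assumed utility
# u_i(w) = <sum_j w_j * theta_j, theta_i> measures how well the merged
# parameters serve expert i.
import numpy as np

def nash_merge(experts, steps=500, lr=0.05):
    """experts: list of flattened, nonnegative parameter vectors."""
    T = np.stack(experts)            # (n, d) expert parameter matrix
    G = T @ T.T                      # Gram matrix: utilities are G @ w
    n = len(experts)
    w = np.full(n, 1.0 / n)          # start from uniform merging weights
    for _ in range(steps):
        u = G @ w                    # per-expert utilities u_i(w)
        grad = G.T @ (1.0 / u)       # gradient of sum_i log u_i(w)
        w = w + lr * grad / n        # ascent step
        w = np.clip(w, 1e-8, None)   # keep weights positive, then
        w = w / w.sum()              # renormalize onto the simplex
    return w, w @ T                  # merging weights and merged params

rng = np.random.default_rng(0)
experts = [rng.random(16) for _ in range(3)]
weights, merged = nash_merge(experts)
```

The clip-and-renormalize step is a crude stand-in for an exact simplex projection; it keeps the sketch short while preserving the key property that the output is a convex combination of the experts.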
Problem

Research questions and friction points this paper is trying to address.

Improving expert merging strategies in Sparse Mixture of Experts
Establishing principled weighting mechanisms through game theory
Enabling balanced collaboration among experts with theoretical guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Nash Bargaining for balanced expert collaboration
Incorporates complex momentum to accelerate expert propagation
Integrates seamlessly with popular MoE architectures and scales to large systems
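The complex-momentum idea in the bullets above can be sketched on a toy quadratic. This is a hypothetical illustration of the general complex-momentum update, not the paper's specific optimizer: the velocity carries a complex decay factor β, and only its real part is applied to the (real) parameters. The values lr=0.1 and β=0.8+0.3j are arbitrary choices for the demo.

```python
# Hypothetical sketch of a complex-momentum update (illustrative, not the
# paper's exact optimizer): the velocity accumulates gradients with a
# complex decay beta, and only the real part moves the parameters.
import numpy as np

def complex_momentum_step(theta, grad, velocity, lr=0.1, beta=0.8 + 0.3j):
    """One update; velocity is complex-valued, theta stays real."""
    velocity = beta * velocity - grad    # complex momentum accumulation
    theta = theta + lr * velocity.real   # apply only the real component
    return theta, velocity

# Toy example: minimize f(x) = x^2, whose gradient is 2x.
theta, v = 5.0, 0.0 + 0.0j
for _ in range(200):
    theta, v = complex_momentum_step(theta, 2 * theta, v)
```

The complex phase of β makes the effective momentum coefficients oscillate as they decay (Re β^k = |β|^k cos kφ), which can damp the overshoot that real heavy-ball momentum exhibits on ill-conditioned problems.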
👥 Authors
Dung V. Nguyen
Department of Mathematics, National University of Singapore
Anh T. Nguyen
Viettel AI, Viettel Group
Minh H. Nguyen
Faculty of Mathematics and Informatics, Hanoi University of Science and Technology
Luc Q. Nguyen
Viettel AI, Viettel Group
Shiqi Jiang
Department of Mathematics, National University of Singapore
Ethan Fetaya
Bar-Ilan University — Machine learning, Computer vision
Linh Duy Tran
Viettel AI, Viettel Group
Gal Chechik
NVIDIA, Bar-Ilan University — Machine learning, AI, Machine perception
Tan M. Nguyen
Department of Mathematics, National University of Singapore