🤖 AI Summary
Sparse Mixture-of-Experts (SMoE) models lack a principled, theoretically grounded weighting mechanism for merging experts. Method: the paper reinterprets expert merging through the lens of game theory, using the Nash bargaining solution to model cooperative and competitive interactions among experts. This yields an interpretable, dynamically adaptive weighting strategy, which is further combined with a complex-momentum scheme that accelerates expert propagation with theoretical convergence guarantees. Contribution/Results: the resulting framework, NAMEx, is architecture-agnostic and integrates with mainstream MoE designs without modifying the base model. Experiments across language modelling, text classification, image classification, and zero-shot robustness under data corruption show consistent improvements over competing merging approaches, and the method scales to large systems such as Qwen1.5-MoE (14B) and DeepSeek-MoE (16B), where it is effective in both zero-shot and fine-tuning settings.
📝 Abstract
Existing expert merging strategies for Sparse Mixture of Experts (SMoE) typically rely on input-dependent or input-independent averaging of expert parameters, but often lack a principled weighting mechanism. In this work, we reinterpret expert merging through the lens of game theory, revealing cooperative and competitive dynamics among experts. Based on this perspective, we introduce Nash Merging of Experts (NAMEx), a novel framework that incorporates Nash Bargaining into the merging process, enabling more balanced and efficient collaboration among experts. Additionally, we incorporate complex momentum into NAMEx to accelerate expert propagation with theoretical guarantees for convergence. Extensive experiments across language modelling, text classification, image classification, and zero-shot robustness under data corruption show that NAMEx consistently outperforms competing methods while integrating seamlessly with popular MoE architectures. Finally, we demonstrate NAMEx's scalability by applying it to large-scale systems, including Qwen1.5-MoE (14B) and DeepSeek-MoE (16B), where it proves effective in both zero-shot and fine-tuning settings.
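To make the Nash bargaining idea concrete, here is a minimal toy sketch (our own construction for illustration, not the paper's NAMEx algorithm): merge two experts' parameters as θ(w) = w·θ₁ + (1−w)·θ₂ and choose w as the Nash bargaining solution, i.e. the weight that maximizes the product of each expert's utility gain over its disagreement point. The utility and disagreement definitions below are assumptions chosen for the example.

```python
import numpy as np

def nash_merge_weight(th1, th2, scale=(1.0, 1.0),
                      grid=np.linspace(1e-3, 1 - 1e-3, 999)):
    """Pick the merge weight w for theta = w*th1 + (1-w)*th2 via a toy
    Nash bargaining problem (illustrative; not the paper's exact scheme)."""
    def gains(w):
        merged = w * th1 + (1 - w) * th2
        # Utility of expert i: negative squared distance of the merged
        # parameters from its own parameters (scaled to show invariance).
        u1 = -scale[0] * np.sum((merged - th1) ** 2)
        u2 = -scale[1] * np.sum((merged - th2) ** 2)
        # Disagreement point d_i: the utility expert i would get if the
        # OTHER expert's parameters were used alone (no bargaining).
        d1 = -scale[0] * np.sum((th2 - th1) ** 2)
        d2 = -scale[1] * np.sum((th1 - th2) ** 2)
        return u1 - d1, u2 - d2
    # Nash solution: maximize the product of gains over the disagreement point.
    products = [np.prod(gains(w)) for w in grid]
    return float(grid[int(np.argmax(products))])

th1, th2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
w_star = nash_merge_weight(th1, th2)                       # symmetric experts
w_rescaled = nash_merge_weight(th1, th2, scale=(5.0, 1.0)) # rescaled utilities
```

Two properties surface even in this toy: symmetric experts bargain to an even split (w ≈ 0.5), and rescaling one expert's utility leaves the solution unchanged, reflecting the scale invariance of the Nash bargaining solution.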