🤖 AI Summary
To address the challenges of integrating multi-domain heterogeneous expert large language models—namely, architectural incompatibility, severe parameter interference, and high fine-tuning costs—this paper proposes a unified Mixture-of-Experts (MoE) model merging framework. Our key contributions are threefold: (1) a novel heterogeneous expert alignment and mapping mechanism enabling seamless integration of both homogeneous and heterogeneous experts; (2) a parameter-interference-resilient weighted fusion strategy coupled with a lightweight dynamic routing heuristic, drastically reducing reliance on task-specific fine-tuning; and (3) multi-objective performance distillation to jointly optimize domain specialization and general-purpose capability. Evaluated on diverse benchmarks—including mathematical reasoning and code generation—our method outperforms existing state-of-the-art merging approaches, reduces fine-tuning cost by over 60%, and achieves substantial gains in generalization and robustness.
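The "lightweight dynamic routing heuristic" mentioned above could, in spirit, resemble nearest-centroid routing: send each input to the expert whose domain centroid lies closest in embedding space, avoiding any trained router. The sketch below is purely illustrative (the function name `route`, the centroid representation, and the distance metric are assumptions, not the paper's actual heuristic).

```python
# Hypothetical sketch of a lightweight routing heuristic: pick the top_k
# experts whose domain centroids are nearest to the input embedding.
# This is an illustrative assumption, not the paper's exact method.

def route(embedding, centroids, top_k=1):
    """Return indices of the top_k experts, ranked by proximity.

    embedding : list[float]       -- representation of the input
    centroids : list[list[float]] -- one domain centroid per expert
    """
    def neg_sq_dist(c):
        # Negative squared Euclidean distance, so larger = closer.
        return -sum((e - ci) ** 2 for e, ci in zip(embedding, c))

    scores = [(neg_sq_dist(c), idx) for idx, c in enumerate(centroids)]
    scores.sort(reverse=True)  # closest experts first
    return [idx for _, idx in scores[:top_k]]
```

A heuristic of this kind requires only a forward pass per expert to build centroids, which is one plausible way a merged MoE could skip router fine-tuning.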
📝 Abstract
The recent success of specialized Large Language Models (LLMs) in domains such as mathematical reasoning and coding has led to growing interest in methods for merging these expert LLMs into a unified Mixture-of-Experts (MoE) model, with the goal of enhancing performance in each domain while retaining effectiveness on general tasks. However, effectively merging expert models remains an open challenge, especially for models with highly divergent weight parameters or different architectures. State-of-the-art MoE merging methods work only with homogeneous model architectures and rely on simple unweighted averaging to merge expert layers, which fails to address parameter interference and requires extensive fine-tuning of the merged MoE to restore performance. To address these limitations, this paper introduces new MoE merging techniques, including strategies to mitigate parameter interference, routing heuristics that reduce the need for MoE fine-tuning, and a novel method for merging experts with different architectures. Extensive experiments across multiple domains demonstrate the effectiveness of our proposed methods, reducing fine-tuning costs, improving performance over state-of-the-art methods, and expanding the applicability of MoE merging.