Fine-Grained Model Merging via Modular Expert Recombination

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a modular expert recombination framework to address the limitations of existing model fusion approaches, which typically treat task-specific models as monolithic entities and lack component-level granularity and module reusability. The framework constructs a reusable library of component-level experts and employs a lightweight dynamic routing network to adaptively assemble an optimal sub-model at inference time based on the input. The fusion process is formulated as a bi-objective optimization problem, and a surrogate-assisted evolutionary algorithm efficiently searches for Pareto-optimal configurations. Extensive experiments demonstrate that the proposed method consistently outperforms strong baselines across diverse model scales, task types, and fine-tuning strategies, achieving superior generalization, inference efficiency, and storage economy.
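The summary's core idea of component-level granularity can be illustrated with a minimal sketch (not the paper's implementation; all names and shapes are illustrative): instead of one global merging weight, each homologous component (e.g. attention vs. MLP layers) gets its own coefficient, reflecting that components differ in merging sensitivity.

```python
# Hedged sketch of component-wise model merging. Two task-specific
# models are represented as dicts of component weights; each component
# is interpolated with its own coefficient rather than a single global one.
import numpy as np

def merge_componentwise(model_a, model_b, coeffs):
    """Interpolate matching components with per-component coefficients.

    model_a / model_b: dict of component_name -> weight array
    coeffs: dict of component_name -> alpha in [0, 1]
    """
    merged = {}
    for name in model_a:
        alpha = coeffs[name]
        merged[name] = alpha * model_a[name] + (1 - alpha) * model_b[name]
    return merged

# Toy example: attention layers merge evenly, MLP layers favor model A,
# illustrating differing merging sensitivities across components.
rng = np.random.default_rng(0)
model_a = {"attn": rng.normal(size=(4, 4)), "mlp": rng.normal(size=(4, 4))}
model_b = {"attn": rng.normal(size=(4, 4)), "mlp": rng.normal(size=(4, 4))}
merged = merge_componentwise(model_a, model_b, {"attn": 0.5, "mlp": 0.9})
```

In the paper's framework, each merged component would be stored as a reusable modular expert, and a routing network would pick among such experts per input; the sketch above shows only the per-component interpolation step.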

📝 Abstract
Model merging constructs versatile models by integrating task-specific models without requiring labeled data or expensive joint retraining. Although recent methods improve adaptability to heterogeneous tasks by generating customized merged models for each instance, they face two critical limitations. First, the instance-specific merged models lack reusability, restricting the exploitation of high-quality merging configurations and efficient batch inference. Second, these methods treat each task-specific model as a monolithic whole, overlooking the diverse mergeability of homologous components such as attention and multilayer perceptron layers, and the differing merging sensitivities across components. To address these limitations, we propose MERGE (Modular Expert Recombination for fine-Grained mErging), a method that enables component-wise model merging and input-aware, on-demand module recombination at inference. MERGE formulates component-wise merging as a bi-objective optimization problem that balances cross-task performance and storage efficiency, and develops a surrogate-assisted evolutionary algorithm to efficiently identify Pareto-optimal merging configurations. These high-quality configurations underpin a reusable modular expert library, from which a lightweight routing network dynamically activates and recombines modular experts to assemble input-specific models and enable efficient inference under storage constraints. Extensive experiments across various model scales, task types, and fine-tuning strategies demonstrate that MERGE consistently outperforms strong baselines and generalizes effectively.
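The abstract's bi-objective formulation (cross-task performance vs. storage efficiency) comes down to keeping only non-dominated configurations. A surrogate-assisted evolutionary search, as the paper describes, would generate and score the candidates; the sketch below shows only the non-dominated filtering step, with illustrative numbers that are not from the paper.

```python
# Hedged sketch: Pareto filtering of candidate merging configurations
# under two minimization objectives -- task error and storage cost.
def pareto_front(candidates):
    """Return the (error, storage) pairs not dominated by any other pair."""
    front = []
    for i, (e_i, s_i) in enumerate(candidates):
        dominated = any(
            e_j <= e_i and s_j <= s_i and (e_j < e_i or s_j < s_i)
            for j, (e_j, s_j) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append((e_i, s_i))
    return front

# Illustrative candidates: (0.15, 4.0) is dominated by (0.12, 4.0),
# and (0.12, 6.0) is dominated by (0.12, 4.0); the rest trade off
# error against storage and survive.
configs = [(0.10, 8.0), (0.12, 4.0), (0.09, 9.0), (0.15, 4.0), (0.12, 6.0)]
front = pareto_front(configs)
```

The surviving configurations would then populate the reusable modular expert library from which the routing network assembles input-specific models.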
Problem

Research questions and friction points this paper is trying to address.

model merging
fine-grained merging
modular recombination
mergeability
heterogeneous tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

model merging
modular experts
component-wise fusion
multi-objective optimization
dynamic routing
Haiyun Qiu
Department of Data Science and Artificial Intelligence, The Hong Kong Polytechnic University, Hong Kong, SAR 999077, China
Xingyu Wu
Hong Kong Polytechnic University
Automated machine learning, Causality-based machine learning, Large foundation model, AutoML
Liang Feng
Chongqing University
Computational Intelligence, Transfer Optimization, Multi-Task Optimization, Multi-Agent System
Kay Chen Tan
Department of Data Science and Artificial Intelligence, The Hong Kong Polytechnic University, Hong Kong, SAR 999077, China