M-Loss: Quantifying Model Merging Compatibility with Limited Unlabeled Data

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of conventional model fusion approaches such as parameter averaging, which often introduce non-generalizable features when source models differ substantially, and which lack theoretical grounding and effective metrics for assessing fusion compatibility. The study establishes, for the first time, a theoretical connection between model fusion and ensemble learning, and introduces M-Loss, a layer- or neuron-level inconsistency metric computed from only a small amount of unlabeled data that quantifies the discrepancy between parameter averaging and model ensembling. The metric guides the optimization of fusion strategies and the evaluation of parameter importance, thereby improving pruning efficiency. Experiments demonstrate that M-Loss significantly improves alignment between fused and ensemble models, enabling efficient and accurate model integration while reducing inference cost and storage overhead.

📝 Abstract
Training of large-scale models is both computationally intensive and often constrained by the availability of labeled data. Model merging offers a compelling alternative by directly integrating the weights of multiple source models without requiring additional data or extensive training. However, conventional model merging techniques, such as parameter averaging, often suffer from the unintended combination of non-generalizable features, especially when source models exhibit significant weight disparities. By comparison, model ensembling, which aggregates multiple models by averaging their outputs, generally provides more stable and superior performance, but it incurs higher inference costs and increased storage requirements. While previous studies have experimentally shown similarities between model merging and ensembling, theoretical evidence and evaluation metrics remain lacking. To address this gap, we introduce Merging-ensembling loss (M-Loss), a novel evaluation metric that quantifies the compatibility of merging source models using very limited unlabeled data. By measuring the discrepancy between parameter averaging and model ensembling at the layer and node levels, M-Loss facilitates more effective merging strategies. Specifically, M-Loss serves both as a quantitative criterion for the theoretical feasibility of model merging and as a guide to parameter significance in model pruning. Our theoretical analysis and empirical evaluations demonstrate that incorporating M-Loss into the merging process significantly improves the alignment between merged models and model ensembling, providing a scalable and efficient framework for accurate model consolidation.
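The layer-level discrepancy described in the abstract can be sketched as follows. This is an illustrative reconstruction of the idea, not the paper's exact formulation: `m_loss_layer`, the ReLU activation, and the mean-squared-error discrepancy are all assumptions made for the sketch. It compares the output of a parameter-averaged layer against the averaged outputs of the two source layers on a small unlabeled batch.

```python
import numpy as np

def m_loss_layer(W_a, W_b, X):
    """Illustrative layer-level merging-ensembling discrepancy (a sketch,
    not the paper's definition): mean squared gap between the output of
    the parameter-averaged layer and the average of the two source
    layers' outputs, measured on a small unlabeled batch X."""
    relu = lambda z: np.maximum(z, 0.0)
    merged_out = relu(X @ ((W_a + W_b) / 2.0))            # model merging: average weights
    ensemble_out = (relu(X @ W_a) + relu(X @ W_b)) / 2.0  # model ensembling: average outputs
    return float(np.mean((merged_out - ensemble_out) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 16))        # small unlabeled batch: no labels needed

# Identical source layers merge exactly: zero discrepancy.
W = rng.normal(size=(16, 8))
print(m_loss_layer(W, W, X))         # 0.0

# Dissimilar layers show a positive merging-ensembling gap.
W_a = rng.normal(size=(16, 8))
W_b = rng.normal(size=(16, 8))
print(m_loss_layer(W_a, W_b, X))     # strictly positive for differing weights
```

Note that with a purely linear layer the two quantities coincide for any weights; it is the nonlinearity that makes averaging weights differ from averaging outputs, which is why such a discrepancy is measured per layer or per node.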
Problem

Research questions and friction points this paper is trying to address.

model merging
model ensembling
compatibility evaluation
limited unlabeled data
merging metric
Innovation

Methods, ideas, or system contributions that make the work stand out.

M-Loss
model merging
model ensembling
parameter compatibility
unlabeled data
Tiantong Wang — College of Computing and Data Science, Nanyang Technological University
Yiyang Duan — College of Computing and Data Science, Nanyang Technological University
Haoyu Chen — College of Computing and Data Science, Nanyang Technological University and School of Computer and Information Technology, Beijing Jiaotong University
Tiantong Wu — Nanyang Technological University, Singapore (Blockchain, Internet of Things, Federated Learning)
Wei Yang Bryan Lim — Assistant Professor, Nanyang Technological University (NTU), Singapore (Edge Intelligence, Federated Learning, Applied AI, Sustainable AI)