AI Summary
Model merging often suffers from unpredictable performance, limiting its practical utility. This work introduces the first quantifiable definition of model mergeability and systematically investigates the key factors influencing merging effectiveness, identifying the base model's prior knowledge about the fine-tuning data as the decisive factor. Building on this insight, the authors propose a weighted parameter fusion strategy that effectively preserves weak yet relevant knowledge embedded in the base model. Experimental results demonstrate that the proposed method significantly enhances merging performance in multi-task settings, thereby validating the critical role of the base model's knowledge level in determining the success of model merging.
Abstract
Model merging has emerged as a promising technique for combining multiple fine-tuned models into a single multitask model without retraining. However, the factors that determine whether merging will succeed or fail remain poorly understood. In this work, we investigate why some models merge better than others. To do so, we propose a concrete, measurable definition of mergeability. We investigate several potential causes of high or low mergeability, highlighting the base model's knowledge as a dominant factor: models fine-tuned on instances that the base model knows well are more mergeable than models fine-tuned on instances that the base model struggles with. Based on our mergeability definition, we explore a simple weighted merging technique that better preserves weak knowledge in the base model.
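To make the idea of weighted merging concrete, here is a minimal sketch of one common formulation: each fine-tuned model contributes a "task vector" (its parameter delta from the base), and per-model scalar weights control how much of each delta is folded back into the base. Note this is an illustrative assumption, not the paper's exact fusion rule; the function name `weighted_merge` and the uniform per-model weights are hypothetical.

```python
import numpy as np

def weighted_merge(base, finetuned_models, weights):
    """Merge fine-tuned models into the base via weighted task vectors.

    base: dict mapping parameter name -> np.ndarray
    finetuned_models: list of dicts with the same keys/shapes as base
    weights: one scalar per fine-tuned model (illustrative choice,
             not the weighting scheme proposed in the paper)
    """
    merged = {name: p.copy() for name, p in base.items()}
    for model, w in zip(finetuned_models, weights):
        for name in merged:
            # Add the weighted task vector (delta from the base model).
            merged[name] += w * (model[name] - base[name])
    return merged

# Toy usage: two "models" that each share a single 2-parameter layer.
base = {"w": np.array([1.0, 1.0])}
m1 = {"w": np.array([2.0, 1.0])}   # fine-tuned on task 1
m2 = {"w": np.array([1.0, 3.0])}   # fine-tuned on task 2
merged = weighted_merge(base, [m1, m2], weights=[0.5, 0.5])
# merged["w"] is [1.5, 2.0]: each task's delta contributes at half strength.
```

Down-weighting a task vector in this scheme keeps the merged parameters closer to the base, which is one straightforward way to avoid overwriting knowledge the base model already holds weakly.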