🤖 AI Summary
Instruction-tuned large language models (LLMs) often require substantial labeled data and computational resources for further performance gains, while existing parameter-merging methods lack principled optimization objectives and yield unpredictable outcomes. Method: This paper proposes Extrapolation Merging, a paradigm that introduces model extrapolation into parameter merging for the first time. It constructs a locally optimized search path that guides merging in an interpretable direction without additional training, labeling, or computational overhead, relying solely on pre-existing instruction-tuned models for parameter extrapolation and fusion. Contribution/Results: Evaluated across seven downstream tasks, Extrapolation Merging delivers consistent performance improvements, enhancing the reliability and generalizability of merged models. It addresses the core limitations of conventional merging approaches, namely ambiguous optimization directions and unstable efficacy, establishing a more principled and controllable framework for model composition.
📝 Abstract
Large Language Models (LLMs) require instruction fine-tuning to perform different downstream tasks. However, instruction fine-tuning still demands significant computational resources and labeled data, and the field lacks a paradigm that improves model performance without additional compute or data. Model merging aims to enhance performance by combining the parameters of different models, but without a clear optimization direction during merging, improved performance is not guaranteed. In this paper, we attempt to provide a clear optimization direction for model merging. We first validate the effectiveness of model extrapolation during the instruction fine-tuning phase. We then propose Extrapolation Merging, a paradigm that continues to improve model performance without requiring extra computational resources or data. The extrapolation method gives model merging a clear direction, enabling a local optimization search that enhances the merged model's performance. We conduct experiments on seven different tasks, and the results show that our method consistently improves the model's performance after fine-tuning.
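To make the idea concrete, here is a minimal, hypothetical sketch of how extrapolation can supply a merging direction. It assumes the common extrapolation rule θ_ext = θ_sft + α·(θ_sft − θ_base), i.e., moving past the fine-tuned parameters along the fine-tuning direction, and then uniformly averages the extrapolated candidates; the coefficients, the averaging step, and all names below are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch (not the paper's exact method): extrapolate along the
# direction from base-model parameters toward fine-tuned parameters, then
# merge the extrapolated candidates by uniform averaging.

def extrapolate(base, tuned, alpha):
    """Move past the fine-tuned parameters along the fine-tuning direction:
    theta_ext = theta_sft + alpha * (theta_sft - theta_base)."""
    return {k: tuned[k] + alpha * (tuned[k] - base[k]) for k in tuned}

def merge(models):
    """Uniformly average a list of parameter dictionaries."""
    n = len(models)
    return {k: sum(m[k] for m in models) / n for k in models[0]}

# Toy scalar "parameters" standing in for real weight tensors.
base  = {"w": 0.0, "b": 1.0}   # base (pre-trained) model
tuned = {"w": 2.0, "b": 1.5}   # instruction-tuned model

# A small local search over extrapolation strengths (alpha = 0 is the
# fine-tuned model itself), followed by merging.
candidates = [extrapolate(base, tuned, a) for a in (0.0, 0.5, 1.0)]
merged = merge(candidates)
print(merged)  # {'w': 3.0, 'b': 1.75}
```

In practice each dictionary entry would be a weight tensor (e.g., a model `state_dict`), and the α values would be chosen by evaluating the candidates, which is what gives the merge its explicit search direction.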