🤖 AI Summary
This work proposes SimMerge, an approach to large language model (LLM) merging that avoids the costly merge-and-evaluate search typically needed to select merge operators, model subsets, and merge orders. Using only a small set of unlabeled probes, SimMerge extracts task-agnostic inter-model similarity signals, capturing both functional and structural characteristics, and uses them to predict the performance of candidate pairwise merges. This lets it identify high-performing merging strategies without iterative evaluation cycles. A bandit variant supports adding new tasks and merge operators online, and the method generalizes to multi-way merges and very large models (up to 111B parameters) without retraining. Experiments show that SimMerge outperforms the best fixed merge operator on pairwise merges of 7B-parameter models while substantially reducing evaluation overhead.
📝 Abstract
Model merging combines multiple models into a single model with aggregated capabilities, making it a powerful tool for large language model (LLM) development. However, scaling model merging is challenging: performance depends on the choice of merge operator, model subset, and merge order, often requiring expensive merge-and-evaluate searches. In this work, we introduce SimMerge, a predictive merge-selection method that identifies high-performing merges using inexpensive, task-agnostic similarity signals between models. Given a small set of unlabeled probes, SimMerge extracts functional and structural features to predict the performance of candidate two-way merges, enabling merge-operator, merge-order, and model-subset selection without iterative evaluation. We show that SimMerge consistently outperforms the best fixed merge operator across 7B-parameter LLMs and generalizes to multi-way merges and 111B-parameter LLMs without retraining. We further introduce a bandit variant that supports adding new tasks and operators online. Our results suggest that learning how to merge enables scalable model composition when checkpoint catalogs are large and evaluation budgets are limited.
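To make the idea concrete, here is a toy sketch of the kind of pipeline the abstract describes: cheap similarity features between model pairs (functional similarity on unlabeled probes, structural similarity in weight space) feed a learned regressor that predicts merge performance, and candidates are ranked without ever executing a merge. All feature definitions, data, and the ridge-regression predictor below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def functional_similarity(out_a, out_b):
    """Mean cosine similarity of two models' outputs on shared unlabeled probes."""
    a = out_a / np.linalg.norm(out_a, axis=1, keepdims=True)
    b = out_b / np.linalg.norm(out_b, axis=1, keepdims=True)
    return float((a * b).sum(axis=1).mean())

def structural_similarity(w_a, w_b):
    """Cosine similarity of flattened weight (task) vectors."""
    return float(w_a @ w_b / (np.linalg.norm(w_a) * np.linalg.norm(w_b)))

def features(pair, op_id, n_ops):
    """Feature vector for one candidate: similarity signals + operator one-hot."""
    f = np.zeros(2 + n_ops)
    f[0] = functional_similarity(pair["probes_a"], pair["probes_b"])
    f[1] = structural_similarity(pair["weights_a"], pair["weights_b"])
    f[2 + op_id] = 1.0  # which merge operator would be applied
    return f

# Toy data: 20 candidate model pairs, 3 hypothetical merge operators.
n_ops = 3
pairs = [dict(probes_a=rng.normal(size=(8, 16)),
              probes_b=rng.normal(size=(8, 16)),
              weights_a=rng.normal(size=64),
              weights_b=rng.normal(size=64)) for _ in range(20)]
X = np.array([features(p, op, n_ops) for p in pairs for op in range(n_ops)])
y = rng.normal(size=len(X))  # stand-in for previously observed merge scores

# Closed-form ridge regression as the performance predictor.
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Rank all (pair, operator) candidates by predicted score; pick the best.
pred = X @ w
best = int(np.argmax(pred))
print("best candidate:", best, "predicted score:", round(float(pred[best]), 3))
```

The key design point the abstract implies is that the features are task-agnostic and cheap (no labels, no merge execution), so the same fitted predictor can rank new candidate merges as the checkpoint catalog grows.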