🤖 AI Summary
This work addresses the prohibitively high computational cost of evolutionary model merging on consumer-grade GPUs. We propose the first efficient multi-task/multilingual model merging framework grounded in Item Response Theory (IRT). Methodologically, we integrate IRT-based capability modeling, dataset compression, and subset distillation to construct a lightweight fitness estimator—reducing fitness evaluation cost by 50×—and design an IRT-driven evolutionary search strategy enabling cross-lingual knowledge transfer. Our key contribution is the novel application of IRT to model merging evaluation, providing theoretical interpretability and convergence guarantees. Experiments demonstrate state-of-the-art performance on multilingual and cross-lingual model merging tasks, achieving competitive results using only a single consumer GPU. The open-source framework ensures full reproducibility, substantially lowering the hardware barrier for high-quality model merging.
📝 Abstract
Evolutionary model merging enables the creation of high-performing multi-task models but remains computationally prohibitive for consumer hardware. We introduce MERGE³, an efficient framework that makes evolutionary merging feasible on a single GPU by reducing fitness computation costs 50× while preserving performance. MERGE³ achieves this by Extracting a reduced dataset for evaluation, Estimating model abilities using Item Response Theory (IRT), and Evolving optimal merges via IRT-based performance estimators. Our method enables state-of-the-art multilingual and cross-lingual merging, transferring knowledge across languages with significantly lower computational overhead. We provide theoretical guarantees and an open-source library, democratizing high-quality model merging.
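To make the IRT-based ability estimation concrete, here is a minimal sketch of the standard two-parameter logistic (2PL) IRT model, in which a model's latent ability is inferred from its responses on a small item subset. This is an illustrative implementation of generic 2PL IRT, not the paper's actual estimator; the function names and the gradient-ascent fitting loop are our own assumptions.

```python
import math

def irt_2pl_prob(theta: float, a: float, b: float) -> float:
    """2PL IRT: probability that a model with latent ability `theta`
    answers an item with discrimination `a` and difficulty `b` correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_ability(responses, items, lr=0.1, steps=200):
    """Maximum-likelihood estimate of ability `theta` by gradient ascent
    on the Bernoulli log-likelihood of the observed 0/1 responses.

    responses: list of 0/1 correctness indicators on the item subset
    items:     list of (discrimination a, difficulty b) pairs
    """
    theta = 0.0
    for _ in range(steps):
        # d/d(theta) log-likelihood = sum_i a_i * (r_i - p_i)
        grad = sum(a * (r - irt_2pl_prob(theta, a, b))
                   for r, (a, b) in zip(responses, items))
        theta += lr * grad
    return theta
```

In an evolutionary loop, an estimator like this lets each candidate merge be scored from a handful of calibrated items instead of a full benchmark run, which is where the large reduction in fitness-evaluation cost comes from.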