🤖 AI Summary
This work addresses the challenge of merging large-scale black-box language models (e.g., GPT-4) whose weights are inaccessible. We propose Evo-Merging, the first purely API-driven, gradient-free model merging framework. It employs an evolutionary algorithm incorporating sparsity-based denoising, sign-aware scaling, and asymmetric sparsification to optimize merging coefficients solely through inference-time API queries. Unlike existing methods, which require gradient computation or direct parameter access, Evo-Merging operates without any knowledge of model internals or weights. Evaluated on multi-task benchmarks, it significantly outperforms strong baselines, including Task Arithmetic and DARE, demonstrating for the first time that efficient, scalable language model merging is feasible in the black-box setting. Evo-Merging thus establishes a new paradigm for collaborative model composition in the API-first era, enabling effective integration of proprietary models without exposing architectures or parameters.
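The gradient-free search sketched above can be illustrated with a minimal (1+λ)-style evolutionary hill climber over merging coefficients. Everything here is an assumption for illustration, not the paper's actual algorithm: the function names (`merge`, `evolve_coeffs`), the hyperparameters, and the `fitness_fn` callback, which stands in for inference-time API queries scoring a candidate merged model.

```python
import random

def merge(task_vectors, coeffs):
    """Linearly combine task vectors with per-model coefficients (assumed form)."""
    dim = len(task_vectors[0])
    return [sum(c * tv[i] for c, tv in zip(coeffs, task_vectors))
            for i in range(dim)]

def evolve_coeffs(task_vectors, fitness_fn, pop_size=8, gens=20, sigma=0.1, seed=0):
    """Gradient-free evolutionary search over merging coefficients.

    `fitness_fn` is a black box (e.g., held-out accuracy obtained via API
    queries); no gradients or model internals are used.
    """
    rng = random.Random(seed)
    n = len(task_vectors)
    best = [1.0 / n] * n                       # start from a uniform merge
    best_fit = fitness_fn(merge(task_vectors, best))
    for _ in range(gens):
        for _ in range(pop_size):
            # Gaussian mutation of the current best coefficients
            cand = [c + rng.gauss(0, sigma) for c in best]
            fit = fitness_fn(merge(task_vectors, cand))
            if fit > best_fit:                 # greedy selection
                best, best_fit = cand, fit
    return best, best_fit
```

Because selection only ever replaces the incumbent with a strictly better candidate, the returned fitness is never worse than that of the uniform starting merge, and the only model access required is the ability to score a merged candidate.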
📝 Abstract
Model merging integrates multiple distinct models into a unified model that preserves and combines the strengths of the individual models. Most existing approaches combine models via task vectors, typically under the assumption that model parameters are accessible. However, extremely large language models (LLMs) such as GPT-4 are often provided solely as black-box services through API interfaces (Language-Model-as-a-Service), so model weights are not available to end users. This presents a significant challenge, which we refer to as black-box model merging (BMM) with massive LLMs. To address it, we propose Evo-Merging, a derivative-free optimization framework based on an evolutionary algorithm that enables effective model merging using only inference-time API queries. Our method consists of two key components: (1) sparsity-based denoising, which identifies and filters out irrelevant or redundant information across models, and (2) sign-aware scaling, which dynamically computes optimal combination weights for the relevant models based on their performance. We also provide a formal justification, along with a theoretical analysis, for our asymmetric sparsification. Extensive experiments demonstrate that our approach achieves state-of-the-art results across a range of tasks, significantly outperforming strong baselines.
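One plausible reading of the two components can be sketched as follows, borrowing ideas from related open-weight methods: top-k magnitude pruning of task vectors (as in DARE/TIES-style approaches) for the denoising step, and score-weighted combination with per-coordinate sign election for the scaling step. The function names, the `keep_ratio` parameter, and the sign-election rule are illustrative assumptions, not the paper's exact formulation.

```python
def sparsify(task_vector, keep_ratio=0.2):
    """Sparsity-based denoising (illustrative): keep only the largest-magnitude
    entries of a task vector, treating the small remaining deltas as noise."""
    k = max(1, int(len(task_vector) * keep_ratio))
    threshold = sorted((abs(v) for v in task_vector), reverse=True)[k - 1]
    return [v if abs(v) >= threshold else 0.0 for v in task_vector]

def sign_aware_scale(task_vectors, scores):
    """Sign-aware scaling (illustrative): weight each model's task vector by a
    performance score, electing a majority sign per coordinate and dropping
    contributions that conflict with it."""
    dim = len(task_vectors[0])
    merged = []
    for i in range(dim):
        # Score-weighted majority sign for coordinate i
        total = sum(s * tv[i] for s, tv in zip(scores, task_vectors))
        sign = 1.0 if total >= 0 else -1.0
        # Sum only the score-weighted entries agreeing with the elected sign
        merged.append(sum(s * tv[i] for s, tv in zip(scores, task_vectors)
                          if tv[i] * sign >= 0))
    return merged
```

In this sketch, denoising is applied to each task vector before combination, and the scores could themselves be tuned by the black-box evolutionary search rather than fixed in advance.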