Black-box Model Merging for Language-Model-as-a-Service with Massive Model Repositories

📅 2025-09-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of merging large-scale black-box language models (e.g., GPT-4) whose weights are inaccessible. The authors propose Evo-Merging, a purely API-driven, gradient-free model merging framework. It employs an evolutionary algorithm incorporating sparsity-based denoising, sign-aware scaling, and asymmetric sparsification to optimize combination weights solely via inference-time API queries. Unlike existing methods that require gradient computation or direct parameter access, Evo-Merging operates without any internal model knowledge or weight visibility. Evaluated on multi-task benchmarks, it significantly outperforms strong baselines, including Task Arithmetic and DARE, demonstrating that efficient, scalable language model merging is feasible in the black-box setting. Evo-Merging establishes a paradigm for collaborative model composition in the API-first era, enabling effective integration of proprietary models without architectural or parameter exposure.

📝 Abstract
Model merging refers to the process of integrating multiple distinct models into a unified model that preserves and combines the strengths and capabilities of the individual models. Most existing approaches rely on task vectors to combine models, typically under the assumption that model parameters are accessible. However, for extremely large language models (LLMs) such as GPT-4, which are often provided solely as black-box services through API interfaces (Language-Model-as-a-Service), model weights are not available to end users. This presents a significant challenge, which we refer to as black-box model merging (BMM) with massive LLMs. To address this challenge, we propose a derivative-free optimization framework based on the evolutionary algorithm (Evo-Merging) that enables effective model merging using only inference-time API queries. Our method consists of two key components: (1) sparsity-based denoising, designed to identify and filter out irrelevant or redundant information across models, and (2) sign-aware scaling, which dynamically computes optimal combination weights for the relevant models based on their performance. We also provide a formal justification, along with a theoretical analysis, for our asymmetric sparsification. Extensive experimental evaluations demonstrate that our approach achieves state-of-the-art results on a range of tasks, significantly outperforming existing strong baselines.
Problem

Research questions and friction points this paper is trying to address.

Merging black-box language models without access to weights
Combining multiple API-based LLMs into a unified model
Overcoming parameter inaccessibility in Language-Model-as-a-Service scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-free evolutionary algorithm for black-box merging
Sparsity-based denoising to filter irrelevant information across models
Sign-aware scaling to compute optimal combination weights
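The paper itself does not release pseudocode here, but the three ingredients above can be sketched as a generic derivative-free loop. Below is a minimal, hypothetical illustration: `fitness` stands in for a black-box scorer that evaluates a candidate coefficient vector using only inference-time API queries, and all hyperparameters (`pop_size`, `sparsity`, `sigma`) are placeholder values, not the paper's settings.

```python
import random

def evo_merge(num_models, fitness, pop_size=8, generations=30,
              sparsity=0.25, sigma=0.2, seed=0):
    """Gradient-free evolutionary search over merging coefficients.

    `fitness` is a black-box scorer: it takes a coefficient vector and
    returns a validation score obtained purely via inference-time queries.
    """
    rng = random.Random(seed)

    def sparsify(w):
        # sparsity-based denoising: zero out the smallest-magnitude coefficients
        k = int(len(w) * sparsity)
        if k == 0:
            return list(w)
        cut = sorted(abs(x) for x in w)[k - 1]
        return [0.0 if abs(x) <= cut else x for x in w]

    def scale(w):
        # sign-aware scaling: normalize magnitudes while preserving each sign
        s = sum(abs(x) for x in w)
        return [x / s for x in w] if s else list(w)

    # initial population of random, denoised, rescaled coefficient vectors
    pop = [scale(sparsify([rng.uniform(-1, 1) for _ in range(num_models)]))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]           # keep the best half
        children = [scale(sparsify([x + rng.gauss(0, sigma) for x in p]))
                    for p in elite]            # Gaussian mutation of elites
        pop = elite + children
    return max(pop, key=fitness)
```

In a real Language-Model-as-a-Service setting, `fitness` would query each candidate merge against held-out validation prompts over the API; the elitist selection guarantees the best-found coefficient vector never degrades across generations.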
Shilian Chen
School of Computer Science and Technology, East China Normal University, Shanghai
Jie Zhou
School of Computer Science and Technology, East China Normal University, Shanghai
Tianyu Huai
East China Normal University
Yujiang Lu
School of Computer Science and Technology, East China Normal University, Shanghai
Junsong Li
East China Normal University
Bihao Zhan
East China Normal University
Qianjun Pan
East China Normal University
Yutao Yang
School of Computer Science and Technology, East China Normal University, Shanghai
Xin Li
Shanghai AI Laboratory
Qin Chen
School of Computer Science and Technology, East China Normal University, Shanghai
Hang Yan
The Chinese University of Hong Kong
Liang He
School of Computer Science and Technology, East China Normal University, Shanghai