MAS-GPT: Training LLMs to Build LLM-based Multi-Agent Systems

📅 2025-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM-driven multi-agent systems (MAS) rely on manual configuration or repeated calls to advanced LLMs, resulting in poor adaptability and high inference cost. This work reframes MAS construction as a generative language task: given a user query, a single LLM forward pass generates executable, query-adaptive MAS code. The contributions are threefold: (1) a unified executable-code representation for MAS, enabling the generated system to be run directly; (2) a consistency-oriented pipeline for constructing high-quality, diverse query-MAS instruction data; and (3) MAS-GPT, an open-source medium-sized LLM trained specifically for MAS generation. Across nine benchmarks and five LLMs, the method outperforms more than ten baseline MAS approaches, demonstrating strong effectiveness, inference efficiency, and cross-query generalization.

📝 Abstract
LLM-based multi-agent systems (MAS) have shown significant potential in tackling diverse tasks. However, to design effective MAS, existing approaches heavily rely on manual configurations or multiple calls of advanced LLMs, resulting in inadaptability and high inference costs. In this paper, we simplify the process of building an MAS by reframing it as a generative language task, where the input is a user query and the output is a corresponding MAS. To address this novel task, we unify the representation of MAS as executable code and propose a consistency-oriented data construction pipeline to create a high-quality dataset comprising coherent and consistent query-MAS pairs. Using this dataset, we train MAS-GPT, an open-source medium-sized LLM that is capable of generating query-adaptive MAS within a single LLM inference. The generated MAS can be seamlessly applied to process user queries and deliver high-quality responses. Extensive experiments on 9 benchmarks and 5 LLMs show that the proposed MAS-GPT consistently outperforms 10+ baseline MAS methods on diverse settings, indicating MAS-GPT's high effectiveness, efficiency and strong generalization ability. Code will be available at https://github.com/rui-ye/MAS-GPT.
Problem

Research questions and friction points this paper is trying to address.

Simplify building LLM-based multi-agent systems (MAS).
Reduce manual configurations and high inference costs.
Generate query-adaptive MAS within a single LLM inference.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reframes MAS construction as a generative language task.
Unifies MAS representation as executable code.
Trains MAS-GPT to generate query-adaptive MAS in a single inference.
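The core idea above can be sketched in a few lines of Python. This is an illustrative mock, not the paper's actual implementation: the function names (`generate_mas_code`, `run_llm`, `solve`) and the two-agent draft-then-refine pipeline are hypothetical stand-ins, and `run_llm` is a deterministic stub in place of a real LLM call. The sketch only shows the shape of the workflow: one generation step emits executable MAS code, which is then loaded and run on the query.

```python
def run_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; deterministic stub for illustration."""
    return f"[response to: {prompt}]"


def generate_mas_code(query: str) -> str:
    """Stand-in for MAS-GPT's single forward pass.

    The real model would emit MAS code tailored to the query; here we
    return a fixed two-agent pipeline as an example of the executable-code
    representation described in the paper.
    """
    return '''
def solve(query, llm):
    # Agent 1: draft an initial answer to the query.
    draft = llm(f"Answer the question: {query}")
    # Agent 2: critique and refine the draft.
    final = llm(f"Improve this answer: {draft}")
    return final
'''


def answer(query: str) -> str:
    namespace = {}
    exec(generate_mas_code(query), namespace)  # load the generated MAS code
    return namespace["solve"](query, run_llm)  # execute it on the user query


print(answer("What is 2 + 2?"))
```

The point of the executable-code representation is that generation and execution decouple cleanly: the MAS is just a `solve` function, so any generated system, whatever its agent topology, can be applied to the query with one call.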