OFA-MAS: One-for-All Multi-Agent System Topology Design based on Mixture-of-Experts Graph Generative Models

📅 2026-01-19
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the limited generalizability of existing multi-agent systems (MAS), which rely on task-specific collaboration topologies and cannot share structural knowledge across tasks. To overcome this, the authors propose OFA-TAD, a framework that pioneers a "one-model-for-all-tasks" paradigm by dynamically generating sparse, adaptive collaboration graphs from arbitrary natural-language task descriptions using a single unified model. The approach integrates a Task-Aware Graph State Encoder (TAGSE), a Mixture-of-Experts (MoE) graph generation architecture, and a three-stage training strategy comprising unconditional pretraining, conditional pretraining on large-model-generated data, and fine-tuning on empirically validated graphs. Evaluated across six diverse benchmarks, OFA-TAD significantly outperforms task-specific methods, enabling unified cross-domain topology generation and efficient knowledge transfer.

📝 Abstract
Multi-Agent Systems (MAS) offer a powerful paradigm for solving complex problems, yet their performance is critically dependent on the design of their underlying collaboration topology. As MAS become increasingly deployed in web services (e.g., search engines), designing adaptive topologies for diverse cross-domain user queries becomes essential. Current graph learning-based design methodologies often adhere to a "one-for-one" paradigm, where a specialized model is trained for each specific task domain. This approach suffers from poor generalization to unseen domains and fails to leverage shared structural knowledge across different tasks. To address this, we propose OFA-TAD, a one-for-all framework that generates adaptive collaboration graphs for any task described in natural language through a single universal model. Our approach integrates a Task-Aware Graph State Encoder (TAGSE) that filters task-relevant node information via sparse gating, and a Mixture-of-Experts (MoE) architecture that dynamically selects specialized sub-networks to drive node and edge prediction. We employ a three-stage training strategy: unconditional pre-training on canonical topologies for structural priors, large-scale conditional pre-training on LLM-generated datasets for task-topology mappings, and supervised fine-tuning on empirically validated graphs. Experiments across six diverse benchmarks show that OFA-TAD significantly outperforms specialized one-for-one models, generating highly adaptive MAS topologies. Code: https://github.com/Shiy-Li/OFA-MAS.
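The abstract's core mechanism is sparse, task-conditioned expert routing: a router scores experts against a task representation, keeps only the top-k, and mixes their outputs. The paper does not publish this routine here, so the following is a minimal stdlib-Python sketch of generic top-k MoE gating under assumed names and shapes (`topk_gate`, `moe_forward`, and the toy embeddings are all illustrative, not from OFA-TAD):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def topk_gate(logits, k):
    """Keep the k largest gate probabilities, zero the rest, renormalize."""
    probs = softmax(logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    gates = [p if i in top else 0.0 for i, p in enumerate(probs)]
    s = sum(gates)
    return [g / s for g in gates]

def moe_forward(task_embedding, experts, router_weights, k=2):
    """Route a task embedding to k experts; mix their outputs by gate weight.

    experts: list of callables mapping an embedding to an output vector.
    router_weights: one weight row per expert (a linear router, assumed here).
    """
    logits = [sum(w * x for w, x in zip(row, task_embedding))
              for row in router_weights]
    gates = topk_gate(logits, k)
    out = None
    for g, expert in zip(gates, experts):
        if g == 0.0:
            continue  # inactive experts are never evaluated: the sparsity win
        y = expert(task_embedding)
        out = y if out is None else [o + g * yi for o, yi in zip(out, y)]
        if out is y:
            out = [g * yi for yi in y]
    return out, gates
```

The same top-k-then-renormalize pattern would also cover the TAGSE sparse gating over node information, with node features in place of the task embedding; both uses share the property that only k sub-networks run per input.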
Problem

Research questions and friction points this paper is trying to address.

Multi-Agent Systems
collaboration topology
generalization
cross-domain
graph generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-of-Experts
Graph Generative Model
Multi-Agent System Topology
Task-Aware Graph Encoding
One-for-All Framework