🤖 AI Summary
Existing multi-agent systems (MAS) rely on manually designed roles and communication protocols, limiting the potential of large language models (LLMs) and lacking task adaptability; automated construction methods require validation-set tuning and yield static configurations, precluding dynamic adjustment during inference. This paper proposes SELF-MAS, the first purely inference-time, fully self-supervised MAS construction framework. Its core innovation is a meta-level self-design paradigm: an LLM-driven generate-evaluate-optimize loop enables dynamic agent composition, problem decomposition, and co-evolution of collaboration protocols; meta-feedback mechanisms assess solvability and completeness in real time, enabling instance-level MAS customization without validation data. Evaluated on mathematical reasoning, graduate-level question answering, and software engineering benchmarks, SELF-MAS achieves a 7.44% average accuracy gain over state-of-the-art manual and automated approaches, delivering superior performance–inference cost trade-offs.
📝 Abstract
Multi-agent systems (MAS) leveraging the impressive capabilities of Large Language Models (LLMs) hold significant potential for tackling complex tasks. However, most current MAS depend on manually designed agent roles and communication protocols. These manual designs often fail to align with the underlying LLMs' strengths and struggle to adapt to novel tasks. Recent automatic MAS approaches attempt to mitigate these limitations but typically necessitate a validation set for tuning and yield static MAS designs lacking adaptability during inference. We introduce SELF-MAS, the first self-supervised, inference-time-only framework for automatic MAS design. SELF-MAS employs meta-level design to iteratively generate, evaluate, and refine MAS configurations tailored to each problem instance, without requiring a validation set. Critically, it enables dynamic agent composition and problem decomposition through meta-feedback on solvability and completeness. Experiments across math, graduate-level QA, and software engineering benchmarks, using both closed-source and open-source LLM backbones of varying sizes, demonstrate that SELF-MAS outperforms both manual and automatic MAS baselines, achieving a 7.44% average accuracy improvement over the next strongest baseline while maintaining cost-efficiency. These findings underscore the promise of meta-level self-supervised design for creating effective and adaptive MAS.
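The generate-evaluate-refine loop with meta-feedback described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`generate_mas`, `meta_feedback`, `refine_mas`), the `MASConfig` structure, and the stubbed feedback logic are all assumptions; in SELF-MAS each of these steps would be driven by LLM calls.

```python
# Hypothetical sketch of a meta-level generate -> evaluate -> refine loop.
# All names and the stubbed heuristics below are illustrative assumptions;
# in the actual framework, each step would be an LLM-driven operation.

from dataclasses import dataclass, field

@dataclass
class MASConfig:
    agents: list = field(default_factory=list)    # agent role descriptions
    subtasks: list = field(default_factory=list)  # problem decomposition

def generate_mas(problem: str) -> MASConfig:
    """Stand-in for the LLM proposing an initial MAS for this instance."""
    return MASConfig(agents=["solver"], subtasks=[problem])

def meta_feedback(config: MASConfig) -> dict:
    """Stand-in for LLM meta-feedback: are the subtasks solvable by the
    current agents, and does the decomposition cover the whole problem?
    (Stubbed here with toy size checks.)"""
    return {
        "solvable": len(config.agents) >= len(config.subtasks),
        "complete": len(config.subtasks) >= 2,
    }

def refine_mas(config: MASConfig, feedback: dict) -> MASConfig:
    """Stand-in for the LLM revising the MAS based on meta-feedback."""
    if not feedback["complete"]:
        config.subtasks.append("verify the candidate solution")
    if not feedback["solvable"]:
        config.agents.append("checker")
    return config

def self_mas_design(problem: str, max_rounds: int = 5) -> MASConfig:
    """Iterate generate -> evaluate -> refine until feedback passes,
    yielding an instance-level MAS with no validation set involved."""
    config = generate_mas(problem)
    for _ in range(max_rounds):
        feedback = meta_feedback(config)
        if feedback["solvable"] and feedback["complete"]:
            break
        config = refine_mas(config, feedback)
    return config

config = self_mas_design("prove that sqrt(2) is irrational")
```

The key design point mirrored here is that the loop runs entirely at inference time, per problem instance: the configuration is accepted only once the meta-feedback deems it both solvable and complete, so no training or validation data is needed.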