Why Do Multi-Agent LLM Systems Fail?

📅 2025-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multi-agent large language model systems (MAS) exhibit limited performance gains on popular benchmarks, and their failure mechanisms remain poorly understood. Method: Grounded in 150+ tasks across five popular MAS frameworks and annotations from six expert annotators, the authors propose MASFT, the first fine-grained taxonomy of 14 MAS failure modes, organized into specification and system design failures, inter-agent misalignment, and task verification and termination, with high inter-annotator agreement (Cohen's Kappa = 0.88). They further integrate the taxonomy with an LLM-as-a-Judge evaluation pipeline and open-source the annotated dataset. Contribution/Results: Cross-framework empirical analysis reveals that straightforward interventions, such as improved agent role specification and enhanced orchestration, fail to resolve the core failures, exposing critical robustness deficits in existing MAS approaches. This work establishes a reproducible, scalable paradigm for MAS failure analysis and identifies concrete directions for enhancing multi-agent system reliability.

📝 Abstract
Despite growing enthusiasm for Multi-Agent Systems (MAS), where multiple LLM agents collaborate to accomplish tasks, their performance gains across popular benchmarks remain minimal compared to single-agent frameworks. This gap highlights the need to analyze the challenges hindering MAS effectiveness. In this paper, we present the first comprehensive study of MAS challenges. We analyze five popular MAS frameworks across over 150 tasks, involving six expert human annotators. We identify 14 unique failure modes and propose the Multi-Agent System Failure Taxonomy (MASFT), a comprehensive taxonomy applicable to various MAS frameworks. This taxonomy emerges iteratively from agreements among three expert annotators per study, achieving a Cohen's Kappa score of 0.88. These fine-grained failure modes are organized into three categories: (i) specification and system design failures, (ii) inter-agent misalignment, and (iii) task verification and termination. To support scalable evaluation, we integrate MASFT with LLM-as-a-Judge. We also explore whether the identified failures could be easily prevented by proposing two interventions: improved specification of agent roles and enhanced orchestration strategies. Our findings reveal that the identified failures require more complex solutions, highlighting a clear roadmap for future research. We open-source our dataset and LLM annotator.
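As a concrete illustration of the agreement statistic cited above, Cohen's Kappa measures how much two annotators agree beyond what chance alone would predict. The sketch below uses toy labels, not the paper's actual annotation data; the category names are placeholders.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's Kappa between two annotators over the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labelled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, from each annotator's label frequencies.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example: two annotators labelling eight failure traces.
a = ["design", "misalign", "design", "verify", "verify", "design", "misalign", "verify"]
b = ["design", "misalign", "design", "verify", "design", "design", "misalign", "verify"]
print(cohens_kappa(a, b))  # roughly 0.81: substantial but imperfect agreement
```

Values above ~0.8 are conventionally read as strong agreement, which is why the paper's 0.88 supports the taxonomy's reliability.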
Problem

Research questions and friction points this paper is trying to address.

Analyzes why Multi-Agent LLM Systems (MAS) underperform despite multi-agent collaboration.
Identifies 14 distinct failure modes across 150+ tasks in five MAS frameworks.
Proposes a failure taxonomy and tests interventions for MAS design and alignment.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive taxonomy for MAS failure modes
Integration of MASFT with LLM-as-a-Judge
Proposed interventions for agent role specification
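The LLM-as-a-Judge integration listed above can be pictured as a small annotation pipeline: render an execution trace into a prompt that enumerates the taxonomy's failure categories, then parse the judge model's verdict. The prompt wording, the coarse three-category granularity, and the `judge` callable below are illustrative assumptions, not the paper's exact implementation.

```python
# Top-level MASFT categories from the paper's abstract; the full taxonomy
# has 14 fine-grained modes beneath these.
FAILURE_CATEGORIES = [
    "specification and system design failures",
    "inter-agent misalignment",
    "task verification and termination",
]

def build_judge_prompt(trace: str) -> str:
    """Render a MAS execution trace into a failure-judging prompt."""
    options = "\n".join(f"- {c}" for c in FAILURE_CATEGORIES)
    return (
        "You are auditing a multi-agent LLM system trace for failures.\n"
        f"Trace:\n{trace}\n\n"
        "If the run failed, choose the single best-matching category:\n"
        f"{options}\n"
        "Answer with the category name only, or 'no failure'."
    )

def annotate(trace: str, judge) -> str:
    """`judge` is any callable prompt -> completion (e.g. an LLM API wrapper)."""
    answer = judge(build_judge_prompt(trace)).strip().lower()
    for category in FAILURE_CATEGORIES:
        if category in answer:
            return category
    return "no failure"

# Stub judge for illustration; a real pipeline would call an LLM here.
stub = lambda prompt: "inter-agent misalignment"
print(annotate("Agent A ignored Agent B's corrected answer.", stub))
```

Swapping the stub for a real model call turns this into a scalable annotator, which is how the paper extends its six-person expert annotation to larger trace sets.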