🤖 AI Summary
Large language model (LLM)-based multi-agent systems face challenges in robustness and collaborative efficiency under agent failures—whether due to incompetence or adversarial behavior.
Method: This work introduces the first systematic quantification of fault tolerance across diverse agent topologies and proposes AutoTransform and AutoInject, two automated fault-injection frameworks. It further designs a dual-enhancement paradigm comprising an output-challenge mechanism and a dedicated review agent, integrated with structured message routing and response correction techniques.
Contribution/Results: Experiments identify the hierarchical A→(B↔C) topology as the most robust, sustaining only a 9.2% performance drop under agent failure. The synergistic challenge-and-review strategy significantly improves fault tolerance. All code, datasets, and evaluation frameworks are publicly released, establishing a reproducible benchmark and novel methodology for robustness research in LLM-based multi-agent systems.
📝 Abstract
Large language model-based multi-agent systems have shown great abilities across various tasks due to the collaboration of expert agents, each focusing on a specific domain. However, the impact of clumsy or even malicious agents, i.e., those that frequently make errors in their tasks, on the overall performance of the system remains underexplored. This paper investigates: (1) What is the resilience of various system structures (e.g., A$\rightarrow$B$\rightarrow$C, A$\leftrightarrow$B$\leftrightarrow$C) under faulty agents, on different downstream tasks? (2) How can we increase system resilience to defend against these agents? To simulate faulty agents, we propose two approaches, AutoTransform and AutoInject, which introduce mistakes into the agents' responses. We select four downstream tasks: code generation, math problems, translation, and text evaluation. Results suggest that the hierarchical structure, i.e., A$\rightarrow$(B$\leftrightarrow$C), exhibits superior resilience with the lowest performance drop of $9.2\%$, compared to $26.0\%$ and $31.2\%$ for the other two structures. Additionally, we improve system resilience with two methods: introducing a mechanism for each agent to challenge others' outputs, and an additional agent to review and correct messages. Our code and data are available at https://github.com/CUHK-ARISE/MAS-Resilience.
ightarrow$(B$leftrightarrow$C), exhibits superior resilience with the lowest performance drop of $9.2%$, compared to $26.0%$ and $31.2%$ of other two structures. Additionally, we improve the system resilience with two methods, introducing a mechanism for each agent to challenge others' outputs, and an additional agent to review and correct messages. Our code and data are available at https://github.com/CUHK-ARISE/MAS-Resilience.