🤖 AI Summary
Existing security evaluations for LLM-based multi-agent systems are severely inadequate; mainstream benchmarks focus exclusively on single-agent settings and fail to capture novel vulnerabilities introduced by collaborative dynamics. Method: We introduce TAMAS, the first security benchmark specifically designed for multi-agent LLM systems. It encompasses five collaborative scenarios, six adversarial attack types, 211 tools, 300 adversarial instances, and 100 benign tasks, enabling joint evaluation of robustness and effectiveness across ten mainstream models under three collaboration architectures. Contribution/Results: We propose the Effective Robustness Score (ERS), the first metric to systematically quantify the security–efficacy trade-off in multi-agent systems. Empirical evaluation using the Autogen and CrewAI frameworks reveals widespread susceptibility to adversarial attacks and weak defense capabilities. TAMAS provides a reproducible, extensible evaluation infrastructure to advance research on multi-agent system security.
📝 Abstract
Large Language Models (LLMs) have demonstrated strong capabilities as autonomous agents through tool use, planning, and decision-making abilities, leading to their widespread adoption across diverse tasks. As task complexity grows, multi-agent LLM systems are increasingly used to solve problems collaboratively. However, the safety and security of these systems remain largely under-explored. Existing benchmarks and datasets predominantly focus on single-agent settings, failing to capture the unique vulnerabilities of multi-agent dynamics and coordination. To address this gap, we introduce $\textbf{T}$hreats and $\textbf{A}$ttacks in $\textbf{M}$ulti-$\textbf{A}$gent $\textbf{S}$ystems ($\textbf{TAMAS}$), a benchmark designed to evaluate the robustness and safety of multi-agent LLM systems. TAMAS includes five distinct scenarios comprising 300 adversarial instances across six attack types and 211 tools, along with 100 harmless tasks. We assess system performance across ten backbone LLMs and three agent interaction configurations from the Autogen and CrewAI frameworks, highlighting critical challenges and failure modes in current multi-agent deployments. Furthermore, we introduce the Effective Robustness Score (ERS) to assess the trade-off between safety and task effectiveness of these frameworks. Our findings show that multi-agent systems are highly vulnerable to adversarial attacks, underscoring the urgent need for stronger defenses. TAMAS provides a foundation for systematically studying and improving the safety of multi-agent LLM systems.
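The abstract does not give the ERS formula, but the idea of a single score that penalizes systems which are effective yet unsafe (or safe yet useless) can be illustrated with a minimal sketch. The function name, inputs, and the harmonic-mean combination below are assumptions for illustration only, not the paper's actual definition:

```python
# Hypothetical sketch of an ERS-style trade-off metric. All names and the
# harmonic-mean formulation are illustrative assumptions, NOT the metric
# defined in the TAMAS paper.

def effective_robustness_score(task_success_rate: float,
                               attack_success_rate: float) -> float:
    """Combine effectiveness on benign tasks with robustness to attacks.

    robustness = 1 - attack_success_rate; the harmonic mean rewards
    systems only when BOTH quantities are high. Result lies in [0, 1].
    """
    robustness = 1.0 - attack_success_rate
    if task_success_rate + robustness == 0:
        return 0.0
    return 2 * task_success_rate * robustness / (task_success_rate + robustness)

# A capable-but-fragile system scores far below a balanced one:
balanced = effective_robustness_score(0.8, 0.2)  # effective and robust -> 0.8
fragile = effective_robustness_score(0.9, 0.9)   # effective but unsafe -> 0.18
```

A harmonic mean (rather than an arithmetic mean) makes the score collapse when either effectiveness or robustness is near zero, which matches the stated goal of quantifying the safety–effectiveness trade-off rather than averaging it away.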