TAMAS: Benchmarking Adversarial Risks in Multi-Agent LLM Systems

📅 2025-11-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing security evaluations for LLM-based multi-agent systems are severely inadequate; mainstream benchmarks focus exclusively on single-agent settings and fail to capture the novel vulnerabilities introduced by collaborative dynamics. Method: We introduce TAMAS, the first security benchmark specifically designed for multi-agent LLM systems. It encompasses five collaborative scenarios, six adversarial attack types, 211 tools, 300 adversarial instances, and 100 benign tasks, enabling joint evaluation of robustness and effectiveness across ten mainstream models under three collaboration architectures. Contribution/Results: We propose the Effective Robustness Score (ERS), the first metric to systematically quantify the security–efficacy trade-off in multi-agent systems. Empirical evaluation using the Autogen and CrewAI frameworks reveals widespread susceptibility to adversarial attacks and weak defense capabilities. TAMAS provides a reproducible, extensible evaluation infrastructure to advance research on multi-agent system security.

📝 Abstract
Large Language Models (LLMs) have demonstrated strong capabilities as autonomous agents through tool use, planning, and decision-making abilities, leading to their widespread adoption across diverse tasks. As task complexity grows, multi-agent LLM systems are increasingly used to solve problems collaboratively. However, the safety and security of these systems remain largely under-explored. Existing benchmarks and datasets predominantly focus on single-agent settings, failing to capture the unique vulnerabilities of multi-agent dynamics and coordination. To address this gap, we introduce $\textbf{T}$hreats and $\textbf{A}$ttacks in $\textbf{M}$ulti-$\textbf{A}$gent $\textbf{S}$ystems ($\textbf{TAMAS}$), a benchmark designed to evaluate the robustness and safety of multi-agent LLM systems. TAMAS includes five distinct scenarios comprising 300 adversarial instances across six attack types and 211 tools, along with 100 harmless tasks. We assess system performance across ten backbone LLMs and three agent interaction configurations from the Autogen and CrewAI frameworks, highlighting critical challenges and failure modes in current multi-agent deployments. Furthermore, we introduce the Effective Robustness Score (ERS) to assess the tradeoff between safety and task effectiveness of these frameworks. Our findings show that multi-agent systems are highly vulnerable to adversarial attacks, underscoring the urgent need for stronger defenses. TAMAS provides a foundation for systematically studying and improving the safety of multi-agent LLM systems.
Problem

Research questions and friction points this paper is trying to address.

Evaluating adversarial vulnerabilities in multi-agent LLM systems
Assessing safety risks across diverse attack types and tools
Measuring robustness-performance tradeoffs in collaborative AI frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

TAMAS benchmark evaluates multi-agent LLM system robustness
Includes 300 adversarial instances across six attack types
Introduces Effective Robustness Score for safety assessment
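This page does not reproduce the ERS formula itself. As a purely illustrative sketch (not the paper's actual definition), a safety–effectiveness tradeoff metric of this kind can be modeled as a harmonic-mean combination of an attack-resistance rate and a benign-task success rate, so that a system scores well only when it is both robust and useful; the function name and formula below are hypothetical:

```python
def tradeoff_score(robustness: float, effectiveness: float) -> float:
    """Hypothetical stand-in for an ERS-style metric.

    robustness:    fraction of adversarial instances resisted, in [0, 1]
    effectiveness: fraction of benign tasks completed, in [0, 1]

    A harmonic mean punishes imbalance: a system that is perfectly safe
    but useless (or highly capable but fully attackable) scores near zero.
    """
    if not (0.0 <= robustness <= 1.0 and 0.0 <= effectiveness <= 1.0):
        raise ValueError("both rates must lie in [0, 1]")
    if robustness + effectiveness == 0.0:
        return 0.0
    return 2 * robustness * effectiveness / (robustness + effectiveness)


# Example: a framework resists 90% of the 300 adversarial instances
# but completes only 50% of the 100 benign tasks.
score = tradeoff_score(0.9, 0.5)  # imbalance drags the score well below 0.9
```

The harmonic mean is the same design choice behind the F1 score; any metric that averaged the two rates arithmetically would let a trivially unsafe-but-capable (or safe-but-idle) system look deceptively good.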
Ishan Kavathekar
Undergraduate Researcher, Precog, IIIT-Hyderabad
Hemang Jain
International Institute of Information Technology, Hyderabad
Ameya Rathod
International Institute of Information Technology, Hyderabad
P. Kumaraguru
International Institute of Information Technology, Hyderabad
Tanuja Ganu
Microsoft Research
Machine Learning · AI for Social Good · Optimization