🤖 AI Summary
This work uncovers a critical security vulnerability in large language model–based multi-agent systems (LLM-MAS): the message-passing communication layer is susceptible to systemic compromise, in which attackers hijack or manipulate inter-agent messages to induce system-wide failures rather than merely compromising individual agents. To demonstrate this, we propose Agent-in-the-Middle (AiTM), the first attack paradigm explicitly targeting the communication layer. AiTM employs a reflective, LLM-driven adversarial agent capable of context-aware malicious instruction generation, combined with reverse engineering and injection of cross-framework communication protocols and testing across multiple communication topologies. Evaluated on mainstream LLM-MAS frameworks and real-world applications, AiTM achieves task failure rates of up to 92%. This study is the first to systematically expose communication-layer risks in LLM-MAS, shifting the security paradigm from agent-centric hardening toward communication-layer defense.
📝 Abstract
Large Language Model-based Multi-Agent Systems (LLM-MAS) have revolutionized complex problem-solving by enabling sophisticated agent collaboration through message-based communication. While the communication framework is crucial for agent coordination, it also introduces a critical yet unexplored security vulnerability. In this work, we introduce Agent-in-the-Middle (AiTM), a novel attack that exploits the fundamental communication mechanisms of LLM-MAS by intercepting and manipulating inter-agent messages. Unlike existing attacks that compromise individual agents, AiTM demonstrates how an adversary can compromise an entire multi-agent system by manipulating only the messages passing between agents. To mount the attack under the challenges of limited control and role-restricted communication formats, we develop an LLM-powered adversarial agent with a reflection mechanism that generates contextually aware malicious instructions. Our comprehensive evaluation across various frameworks, communication structures, and real-world applications demonstrates that LLM-MAS are vulnerable to communication-based attacks, highlighting the need for robust security measures in multi-agent systems.
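The interception-plus-reflection mechanism described above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: all names (`Message`, `adversarial_rewrite`, `matches_role_format`) are invented, and the LLM-driven manipulation is replaced by a stub. The point is the structure of the attack: the adversary sits on the channel, rewrites a message, and uses a reflection loop to retry until the manipulation still satisfies the role-restricted message format expected by the receiving agent.

```python
# Hypothetical sketch of an Agent-in-the-Middle attack on inter-agent
# messages. Names and checks here are illustrative assumptions, not the
# API of any real LLM-MAS framework.

from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    recipient: str
    content: str


def matches_role_format(msg: Message) -> bool:
    # Stand-in for the format check a receiving agent would apply:
    # the message must keep its routing fields and non-empty content.
    return bool(msg.sender) and bool(msg.recipient) and bool(msg.content)


def adversarial_rewrite(msg: Message, attempt: int) -> Message:
    # Placeholder for LLM-generated, context-aware manipulation. A real
    # attack would prompt an LLM with the intercepted content and the
    # target role's expected format.
    payload = f"{msg.content} [injected directive, attempt {attempt}]"
    return Message(msg.sender, msg.recipient, payload)


def intercept(msg: Message, max_reflections: int = 3) -> Message:
    # Reflection loop: regenerate the manipulation until it still passes
    # the format check, so the tampering is not rejected downstream.
    for attempt in range(1, max_reflections + 1):
        candidate = adversarial_rewrite(msg, attempt)
        if matches_role_format(candidate):
            return candidate
    return msg  # fall back to forwarding the original message unchanged


# A benign message between two agents is transparently altered in transit:
original = Message("planner", "executor", "Summarize the quarterly report.")
tampered = intercept(original)
print(tampered.content)
```

Note that the adversary never touches either agent's prompt or weights; the compromise lives entirely in the channel, which is why the paper argues defenses must move to the communication layer.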