OMNI-LEAK: Orchestrator Multi-Agent Network Induced Data Leakage

📅 2026-02-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a critical security vulnerability in orchestrator-based multi-agent systems, where attackers can circumvent data access controls through indirect prompt injection, leading to cross-agent leakage of sensitive information. We introduce OMNI-LEAK, the first attack vector specifically designed to exploit this architecture, and systematically evaluate the security of state-of-the-art large language models—including both reasoning and non-reasoning variants—through red-teaming exercises and multi-agent simulation. Our experiments demonstrate that a single indirect prompt injection can manipulate multiple agents simultaneously, resulting in significant data exfiltration. These findings reveal pervasive security blind spots in current multi-agent systems and underscore the urgent need to evolve from single-agent to multi-agent security paradigms.

Technology Category

Application Category

📝 Abstract
As Large Language Model (LLM) agents become more capable, their coordinated use in the form of multi-agent systems is anticipated to emerge as a practical paradigm. Prior work has examined the safety and misuse risks associated with agents. However, much of this has focused on the single-agent case and/or setups missing basic engineering safeguards such as access control, revealing a scarcity of threat modeling in multi-agent systems. We investigate the security vulnerabilities of a popular multi-agent pattern known as the orchestrator setup, in which a central agent decomposes and delegates tasks to specialized agents. Through red-teaming a concrete setup representative of a likely future use case, we demonstrate a novel attack vector, OMNI-LEAK, that compromises several agents to leak sensitive data through a single indirect prompt injection, even in the \textit{presence of data access control}. We report the susceptibility of frontier models to different categories of attacks, finding that both reasoning and non-reasoning models are vulnerable, even when the attacker lacks insider knowledge of the implementation details. Our work highlights the importance of safety research to generalize from single-agent to multi-agent settings, in order to reduce the serious risks of real-world privacy breaches and financial losses and overall public trust in AI agents.
Problem

Research questions and friction points this paper is trying to address.

multi-agent systems
data leakage
prompt injection
access control
LLM agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-agent systems
prompt injection
data leakage
orchestrator architecture
LLM security
🔎 Similar Papers
No similar papers found.