🤖 AI Summary
Current defenses for AI agents rely on monitoring protocols to keep agent behavior aligned with user intent, yet they remain vulnerable to indirect prompt injection attacks. This work proposes a novel attack paradigm, "Agent-as-a-Proxy," which exploits the agent itself as a conduit for malicious payloads. By orchestrating the agent's chain-of-thought reasoning and tool-use capabilities, the attacker can bypass both the agent's internal safeguards and external monitoring systems. Empirical evaluation demonstrates that even strong monitoring models, such as Qwen2.5-72B, can be effectively circumvented by comparably capable agents like GPT-4o mini and Llama-3.1-70B. On the AgentDojo benchmark, this approach achieves high success rates against mainstream defenses including AlignmentCheck and Extract-and-Evaluate, exposing fundamental security flaws in prevailing monitoring paradigms.
📄 Abstract
As AI agents automate critical workloads, they remain vulnerable to indirect prompt injection (IPI) attacks. Current defenses rely on monitoring protocols that jointly evaluate an agent's Chain-of-Thought (CoT) and tool-use actions to ensure alignment with user intent. We demonstrate that these monitoring-based defenses can be bypassed via a novel Agent-as-a-Proxy attack, in which the injected payload uses the agent itself as a delivery mechanism, compromising agent and monitor simultaneously. While prior work on scalable oversight has focused on whether small monitors can supervise large agents, we show that even frontier-scale monitors are vulnerable: large monitoring models like Qwen2.5-72B can be bypassed by agents of similar capability, such as GPT-4o mini and Llama-3.1-70B. On the AgentDojo benchmark, we achieve a high attack success rate against AlignmentCheck and Extract-and-Evaluate monitors across diverse monitoring LLMs. Our findings suggest that current monitoring-based agentic defenses are fundamentally fragile regardless of model scale.