Bypassing AI Control Protocols via Agent-as-a-Proxy Attacks

πŸ“… 2026-02-04
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Current AI agent defense mechanisms based on monitoring protocols are vulnerable to indirect prompt injection attacks and struggle to ensure alignment with user intent. This work proposes a novel attack paradigm, β€œAgent-as-a-Proxy,” which exploits the agent itself as a conduit for malicious payloads. By orchestrating chain-of-thought reasoning and tool-use capabilities, the attacker can bypass both the agent’s internal safeguards and external monitoring systems. Empirical evaluation demonstrates that even state-of-the-art monitoring models, such as Qwen2.5-72B, can be effectively circumvented by comparably capable agents like GPT-4o mini and Llama-3.1-70B. On the AgentDojo benchmark, this approach achieves high success rates against mainstream defenses including AlignmentCheck and Extract-and-Evaluate, exposing fundamental security flaws in prevailing monitoring paradigms.

Technology Category

Application Category

πŸ“ Abstract
As AI agents automate critical workloads, they remain vulnerable to indirect prompt injection (IPI) attacks. Current defenses rely on monitoring protocols that jointly evaluate an agent's Chain-of-Thought (CoT) and tool-use actions to ensure alignment with user intent. We demonstrate that these monitoring-based defenses can be bypassed via a novel Agent-as-a-Proxy attack, where prompt injection attacks treat the agent as a delivery mechanism, bypassing both agent and monitor simultaneously. While prior work on scalable oversight has focused on whether small monitors can supervise large agents, we show that even frontier-scale monitors are vulnerable. Large-scale monitoring models like Qwen2.5-72B can be bypassed by agents with similar capabilities, such as GPT-4o mini and Llama-3.1-70B. On the AgentDojo benchmark, we achieve a high attack success rate against AlignmentCheck and Extract-and-Evaluate monitors under diverse monitoring LLMs. Our findings suggest current monitoring-based agentic defenses are fundamentally fragile regardless of model scale.
Problem

Research questions and friction points this paper is trying to address.

AI control protocols
indirect prompt injection
agent monitoring
alignment
adversarial attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agent-as-a-Proxy
indirect prompt injection
monitoring-based defense
Chain-of-Thought
AI alignment