AgentXploit: End-to-End Redteaming of Black-Box AI Agents

📅 2025-05-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the previously overlooked indirect prompt injection (IPI) vulnerability in black-box AI agents—where attackers compromise system prompts, tool descriptions, or other contextual elements rather than user inputs. We propose the first end-to-end automated red-teaming framework for IPI detection. Methodologically, we introduce a Monte Carlo Tree Search (MCTS)-guided seed selection mechanism, integrated with dynamic corpus construction and IPI-specific attack modeling, enabling efficient black-box fuzzing with high success rates, strong cross-instance transferability, and broad model generalization. Evaluated on AgentDojo and VWA-adv benchmarks, our framework achieves 71% and 70% attack success rates against o3-mini and GPT-4o, respectively—nearly doubling baseline performance. Crucially, we demonstrate real-world impact by successfully inducing deployed agents to visit malicious URLs, thereby validating both the practical severity of IPI and the effectiveness of our detection approach.

Technology Category

Application Category

📝 Abstract
The strong planning and reasoning capabilities of Large Language Models (LLMs) have fostered the development of agent-based systems capable of leveraging external tools and interacting with increasingly complex environments. However, these powerful features also introduce a critical security risk: indirect prompt injection, a sophisticated attack vector that compromises the core of these agents, the LLM, by manipulating contextual information rather than direct user prompts. In this work, we propose a generic black-box fuzzing framework, AgentXploit, designed to automatically discover and exploit indirect prompt injection vulnerabilities across diverse LLM agents. Our approach starts by constructing a high-quality initial seed corpus, then employs a seed selection algorithm based on Monte Carlo Tree Search (MCTS) to iteratively refine inputs, thereby maximizing the likelihood of uncovering agent weaknesses. We evaluate AgentXploit on two public benchmarks, AgentDojo and VWA-adv, where it achieves 71% and 70% success rates against agents based on o3-mini and GPT-4o, respectively, nearly doubling the performance of baseline attacks. Moreover, AgentXploit exhibits strong transferability across unseen tasks and internal LLMs, as well as promising results against defenses. Beyond benchmark evaluations, we apply our attacks in real-world environments, successfully misleading agents to navigate to arbitrary URLs, including malicious sites.
Problem

Research questions and friction points this paper is trying to address.

Detect indirect prompt injection vulnerabilities in black-box AI agents
Automate discovery of weaknesses in LLM-based agent systems
Evaluate attack success rates across diverse benchmarks and real-world scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Black-box fuzzing framework for LLM agents
Monte Carlo Tree Search for seed selection
Automated indirect prompt injection vulnerability discovery
🔎 Similar Papers
No similar papers found.