WASP: Benchmarking Web Agent Security Against Prompt Injection Attacks

📅 2025-04-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing security evaluations of AI web navigation agents suffer from oversimplified attack objectives, non-malicious instruction injections, and unrealistic attacker privilege assumptions. Method: We propose WASP—the first language-vision web agent security benchmark grounded in realistic scenarios—featuring pragmatically motivated hijacking objectives and rigorous evaluation of indirect prompt injection attacks within fully isolated test environments. WASP systematically assesses the robustness of three state-of-the-art web agents—VisualWebArena, Claude Computer Use, and Operator—under real-world operational constraints. Contribution/Results: Experiments reveal that while agents execute malicious instructions at rates of 16–86%, end-to-end success in achieving adversarial objectives remains low (0–17%), exposing a critical security gap: agents are highly susceptible to misdirection yet resistant to precise control. WASP establishes a reproducible, high-fidelity evaluation framework and empirical foundation for advancing web agent security research.

Technology Category

Application Category

📝 Abstract
Web navigation AI agents use language-and-vision foundation models to enhance productivity but these models are known to be susceptible to indirect prompt injections that get them to follow instructions different from the legitimate user's. Existing explorations of this threat applied to web agents often focus on a single isolated adversarial goal, test with injected instructions that are either too easy or not truly malicious, and often give the adversary unreasonable access. In order to better focus adversarial research, we construct a new benchmark called WASP (Web Agent Security against Prompt injection attacks) that introduces realistic web agent hijacking objectives and an isolated environment to test them in that does not affect real users or the live web. As part of WASP, we also develop baseline attacks against three popular web agentic systems (VisualWebArena, Claude Computer Use, and Operator) instantiated with various state-of-the-art models. Our evaluation shows that even AI agents backed by models with advanced reasoning capabilities and by models with instruction hierarchy mitigations are susceptible to low-effort human-written prompt injections. However, the realistic objectives in WASP also allow us to observe that agents are currently not capable enough to complete the goals of attackers end-to-end. Agents begin executing the adversarial instruction between 16 and 86% of the time but only achieve the goal between 0 and 17% of the time. Based on these findings, we argue that adversarial researchers should demonstrate stronger attacks that more consistently maintain control over the agent given realistic constraints on the adversary's power.
Problem

Research questions and friction points this paper is trying to address.

Assessing web AI agent vulnerability to realistic prompt injection attacks
Developing benchmark for secure web agent testing without live web risks
Evaluating attack success rates on current AI agent systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed WASP benchmark for realistic web agent hijacking
Tested baseline attacks on three web agent systems
Evaluated agent susceptibility to human-written prompt injections
🔎 Similar Papers
No similar papers found.