VPI-Bench: Visual Prompt Injection Attacks for Computer-Use Agents

📅 2025-06-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Visual Prompt Injection (VPI) poses a critical security threat to Computer-Using Agents (CUAs) and Browser-Using Agents (BUAs), yet prior work has largely overlooked UI-layer vulnerabilities in CUAs. Method: This work formally defines the VPI attack paradigm for the first time and introduces VPI-Bench—the first benchmark tailored to CUAs—comprising 306 interactive, realistically rendered test cases across five major platforms. It employs web-rendering hijacking, visual instruction steganography, dynamic DOM injection, and multi-platform sandboxed deployment, with malicious UI variants generated via human–LLM collaboration. Results: Experiments reveal up to 51% success rates against CUAs and 100% against BUAs; conventional defenses (e.g., system prompts) yield marginal improvements (<5%). The study underscores the urgent need for context-aware defenses and establishes foundational tools and empirical evidence for CUA security evaluation and mitigation.

Technology Category

Application Category

📝 Abstract
Computer-Use Agents (CUAs) with full system access enable powerful task automation but pose significant security and privacy risks due to their ability to manipulate files, access user data, and execute arbitrary commands. While prior work has focused on browser-based agents and HTML-level attacks, the vulnerabilities of CUAs remain underexplored. In this paper, we investigate Visual Prompt Injection (VPI) attacks, where malicious instructions are visually embedded within rendered user interfaces, and examine their impact on both CUAs and Browser-Use Agents (BUAs). We propose VPI-Bench, a benchmark of 306 test cases across five widely used platforms, to evaluate agent robustness under VPI threats. Each test case is a variant of a web platform, designed to be interactive, deployed in a realistic environment, and containing a visually embedded malicious prompt. Our empirical study shows that current CUAs and BUAs can be deceived at rates of up to 51% and 100%, respectively, on certain platforms. The experimental results also indicate that system prompt defenses offer only limited improvements. These findings highlight the need for robust, context-aware defenses to ensure the safe deployment of multimodal AI agents in real-world environments. The code and dataset are available at: https://github.com/cua-framework/agents
Problem

Research questions and friction points this paper is trying to address.

Investigates Visual Prompt Injection attacks on Computer-Use Agents
Assesses agent vulnerabilities across five widely used platforms
Highlights limited effectiveness of current system prompt defenses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual Prompt Injection attacks on interfaces
VPI-Bench benchmark for agent robustness
Limited effectiveness of system prompt defenses
🔎 Similar Papers
No similar papers found.