🤖 AI Summary
Vision-language model (VLM)-driven web agents lack systematic security evaluation methodologies.
Method: This paper proposes the first controllable black-box adversarial attack framework, requiring only black-box access to the target agent. It injects visually imperceptible, appearance-preserving adversarial prompts to induce malicious behaviors (e.g., erroneous financial transactions) and enables substring-level dynamic switching of attack intents for precise target control. Critically, it employs Direct Preference Optimization (DPO) to iteratively refine the adversarial prompt generator using black-box query feedback.
Contribution/Results: Evaluated on GPT-4V-powered state-of-the-art web agents across diverse real-world website tasks, the framework achieves high attack success rates. It is the first work to empirically expose critical vulnerabilities in current VLM agents—specifically, their insufficient prompt robustness and decision-level security deficiencies—thereby establishing a foundational benchmark for evaluating and improving the safety of VLM-based web automation systems.
📝 Abstract
Vision Language Models (VLMs) have revolutionized the creation of generalist web agents, empowering them to autonomously complete diverse tasks on real-world websites, thereby boosting human efficiency and productivity. However, despite their remarkable capabilities, the safety and security of these agents against malicious attacks remain critically underexplored, raising significant concerns about their safe deployment. To uncover and exploit such vulnerabilities in web agents, we provide AdvWeb, a novel black-box attack framework designed against web agents. AdvWeb trains an adversarial prompter model that generates and injects adversarial prompts into web pages, misleading web agents into executing targeted adversarial actions such as inappropriate stock purchases or incorrect bank transactions, actions that could lead to severe real-world consequences. With only black-box access to the web agent, we train and optimize the adversarial prompter model using DPO, leveraging both successful and failed attack strings against the target agent. Unlike prior approaches, our adversarial string injection maintains stealth and control: (1) the appearance of the website remains unchanged before and after the attack, making it nearly impossible for users to detect tampering, and (2) attackers can modify specific substrings within the generated adversarial string to seamlessly change the attack objective (e.g., purchasing stocks from a different company), enhancing attack flexibility and efficiency. We conduct extensive evaluations, demonstrating that AdvWeb achieves high success rates in attacking SOTA GPT-4V-based VLM agent across various web tasks. Our findings expose critical vulnerabilities in current LLM/VLM-based agents, emphasizing the urgent need for developing more reliable web agents and effective defenses. Our code and data are available at https://ai-secure.github.io/AdvWeb/ .