🤖 AI Summary
This work addresses a critical security vulnerability in computer-using agents (CUAs) operating in the visual modality, which are susceptible to adversarial attacks that compromise their decision-making. The paper proposes PRAC, a novel attack that, for the first time, embeds an imperceptible adversarial patch into the input image to manipulate the internal attention mechanisms of multimodal large language models, rather than directly altering model outputs, thereby implicitly steering the CUA's preference toward a target choice. Although PRAC requires white-box access to generate the adversarial examples, it transfers even to fine-tuned versions of the same model. Empirical evaluation in an online shopping scenario confirms its effectiveness and generalizability: PRAC reliably induces CUAs to select an attacker-specified product.
📝 Abstract
Advancements in multimodal foundation models have enabled the development of Computer Use Agents (CUAs) capable of autonomously interacting with GUI environments. Because CUAs are not restricted to specific tools, they can automate more complex agentic tasks, but at the same time they open up new security vulnerabilities. While prior work has concentrated on the language modality, the vulnerability of the vision modality has received less attention. In this paper, we introduce PRAC, a novel attack that, unlike prior work targeting the VLM output directly, manipulates the model's internal preferences by redirecting its attention toward a stealthy adversarial patch. We show that PRAC is able to manipulate the selection process of a CUA on an online shopping platform towards a chosen target product. While we require white-box access to the model to craft the attack, we show that our attack generalizes to fine-tuned versions of the same model, presenting a critical threat as multiple companies build specialized CUAs on top of open-weight models.
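The paper's actual objective and model are not reproduced here, but the core idea — optimizing a norm-bounded perturbation so that the visual token under the patch captures more attention mass, rather than attacking the output logits — can be sketched on a toy single-query attention layer. Everything below (the token/query setup, the budget `eps`, the step size) is a hypothetical illustration, not PRAC's implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, n = 16, 8
tokens = rng.normal(size=(n, d))   # toy "visual tokens" of a screenshot
query = rng.normal(size=d)         # toy decision query of the model
patch_idx = 3                      # token covered by the adversarial patch
eps = 0.5                          # L-inf budget: proxy for imperceptibility

delta = np.zeros(d)
for _ in range(300):
    t = tokens.copy()
    t[patch_idx] += delta
    a_p = softmax(t @ query)[patch_idx]   # attention mass on the patch
    # exact gradient of a_p w.r.t. the patch embedding: a_p * (1 - a_p) * query
    delta += 0.1 * a_p * (1 - a_p) * query
    delta = np.clip(delta, -eps, eps)     # project back into the stealth budget

t = tokens.copy()
t[patch_idx] += delta
before = softmax(tokens @ query)[patch_idx]
after = softmax(t @ query)[patch_idx]
print(f"attention on patch token: {before:.3f} -> {after:.3f}")
```

Gradient ascent on the attention weight (with projection onto the perturbation budget) increases the mass the query assigns to the patched token, which is the "implicit preference steering" mechanism in miniature; a real attack would backpropagate through the VLM's full attention stack and pixel-space patch instead.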