🤖 AI Summary
This work addresses the challenge of mounting effective backdoor attacks on screenshot-based mobile GUI agents, which existing methods struggle to achieve due to a limited trigger design space, system-level interference, and conflicts arising from multiple trigger-action mappings. To overcome these limitations, we propose AgentRAE, the first approach that leverages seemingly benign application icons in the notification bar as highly stealthy, natural triggers, enabling remote command hijacking through a two-stage pipeline. Our method employs contrastive learning to enhance the agent's sensitivity to subtle icon variations and fine-tunes the model with a backdoor objective to precisely bind triggers to target actions. Experiments demonstrate that AgentRAE achieves over 90% attack success rate across ten mobile interaction tasks while preserving clean-sample performance, remains visually imperceptible, and effectively evades eight state-of-the-art defense mechanisms.
📄 Abstract
The rapid adoption of mobile graphical user interface (GUI) agents, which autonomously control applications and operating systems (OS), exposes new system-level attack surfaces. Existing backdoors against web GUI agents and general GenAI models rely on environmental injection or deceptive pop-ups to mislead agent operation. However, these techniques fail on screenshot-based mobile GUI agents due to restricted trigger design spaces, OS background interference, and conflicts among multiple trigger-action mappings. We propose AgentRAE, a novel backdoor attack capable of inducing Remote Action Execution in mobile GUI agents using visually natural triggers (e.g., benign app icons in notifications). To address the underfitting caused by natural triggers and achieve accurate multi-target action redirection, we design a novel two-stage pipeline that first enhances the agent's sensitivity to subtle iconographic differences via contrastive learning, and then associates each trigger with a specific mobile GUI agent action through backdoor post-training. Our extensive evaluation reveals that the proposed backdoor preserves clean performance while achieving an attack success rate of over 90% across ten mobile operations. Furthermore, the benign-looking triggers are hard to detect visually, and the attack circumvents eight representative state-of-the-art defenses. These results expose an overlooked backdoor vector in mobile GUI agents, underscoring the need for defenses that scrutinize notification-conditioned behaviors and internal agent representations.
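The two-stage pipeline described above can be sketched in miniature. The snippet below is a hedged illustration, not the paper's implementation: stage 1 uses a standard InfoNCE-style contrastive loss to separate a trigger icon's embedding from visually similar benign icons, and stage 2 combines a clean-task loss with a trigger-to-action binding loss. The function names and the weighting knob `lam` are hypothetical.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Stage 1 (illustrative): InfoNCE contrastive loss over L2-normalised
    1-D icon embeddings. Pulling an anchor toward its positive (an
    augmentation of the same icon) while pushing away subtly different
    icons sharpens the model's sensitivity to small iconographic changes."""
    pos_sim = np.dot(anchor, positive) / temperature
    neg_sims = np.array([np.dot(anchor, n) for n in negatives]) / temperature
    logits = np.concatenate(([pos_sim], neg_sims))
    # cross-entropy with the positive pair treated as the correct class
    return -pos_sim + np.log(np.sum(np.exp(logits)))

def backdoor_objective(clean_loss, trigger_loss, lam=1.0):
    """Stage 2 (illustrative): total post-training objective that preserves
    clean-task behaviour while binding each trigger icon to its target
    action; `lam` is a hypothetical trade-off weight."""
    return clean_loss + lam * trigger_loss
```

In this sketch, a lower `info_nce_loss` for well-separated embeddings indicates the representation distinguishes trigger variants, and `lam` trades off clean accuracy against attack success, consistent with the paper's report of preserved clean performance alongside a high attack success rate.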