🤖 AI Summary
This work addresses the vulnerability of mobile vision-language model (VLM) agents to environmental injection attacks, such as deceptive notifications and overlay windows, in dynamic GUI environments, where such attacks cause perceptual errors and erroneous actions. Methodologically, we construct the first Android-specific benchmark for evaluating robustness to environmental injection: adversarial UI elements are dynamically injected into a real Android emulator; automated assessment is performed over screenshot sequences, action trajectories, and multimodal playback; and a fine-grained failure-analysis protocol, driven by a judge LLM, precisely localizes failures in perception, recognition, or reasoning. Our contributions are threefold: (1) establishing environmental injection as a novel threat paradigm for mobile VLM agents; (2) developing an executable, reproducible dynamic evaluation framework; and (3) empirically demonstrating severe robustness deficiencies across state-of-the-art VLM-based agents, thereby providing a foundational benchmark and empirical basis for future defense research.
📝 Abstract
Vision-Language Models (VLMs) are increasingly deployed as autonomous agents to navigate mobile graphical user interfaces (GUIs). Because they operate in dynamic on-device ecosystems that include notifications, pop-ups, and inter-app interactions, these agents are exposed to a unique and underexplored threat vector: environmental injection. Unlike prompt-based attacks that manipulate textual instructions, environmental injection corrupts an agent's visual perception by inserting adversarial UI elements (for example, deceptive overlays or spoofed notifications) directly into the GUI. This bypasses textual safeguards and can derail execution, causing privacy leakage, financial loss, or irreversible device compromise. To systematically evaluate this threat, we introduce GhostEI-Bench, the first benchmark for assessing mobile agents under environmental injection attacks within dynamic, executable environments. Moving beyond static image-based assessments, GhostEI-Bench injects adversarial events into realistic application workflows inside fully operational Android emulators and evaluates performance across critical risk scenarios. We further propose a judge-LLM protocol that conducts fine-grained failure analysis by reviewing the agent's action trajectory alongside the corresponding screenshot sequence, pinpointing failures in perception, recognition, or reasoning. Comprehensive experiments on state-of-the-art agents reveal pronounced vulnerability to deceptive environmental cues: current models systematically fail to perceive and reason about manipulated UIs. GhostEI-Bench provides a framework for quantifying and mitigating this emerging threat, paving the way toward more robust and secure embodied agents.
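To make the judge-LLM protocol concrete, the sketch below shows one way such a review could be wired up: a prompt is assembled from the agent's action trajectory and the matching screenshot sequence, and the judge's free-form reply is mapped to one of the three failure stages. This is a minimal illustration only; the function names (`build_judge_prompt`, `parse_verdict`), the prompt wording, and the stage labels are assumptions, not the benchmark's actual implementation.

```python
# Illustrative sketch of a judge-LLM failure-analysis step. All identifiers
# and the prompt text are hypothetical; GhostEI-Bench's real protocol and
# rubric may differ.

FAILURE_STAGES = ("perception", "recognition", "reasoning")

def build_judge_prompt(task, actions, screenshot_ids):
    """Assemble the review request sent to the judge LLM, pairing each
    recorded action with its corresponding screenshot reference."""
    steps = "\n".join(
        f"Step {i}: action={a!r}, screenshot=<{s}>"
        for i, (a, s) in enumerate(zip(actions, screenshot_ids), start=1)
    )
    return (
        f"Task: {task}\n"
        f"Trajectory:\n{steps}\n"
        "An adversarial UI element was injected during this episode. "
        "Decide at which stage the agent failed and answer with exactly "
        "one of: " + ", ".join(FAILURE_STAGES) + "."
    )

def parse_verdict(judge_reply):
    """Map the judge LLM's free-form reply to a failure stage, or None
    when no stage label can be found."""
    reply = judge_reply.lower()
    for stage in FAILURE_STAGES:
        if stage in reply:
            return stage
    return None
```

In practice the prompt would be sent to a multimodal judge model together with the screenshot images themselves; the parsing step is kept deliberately simple here to show only the shape of the trajectory-plus-screenshot review.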