Are GUI Agents Focused Enough? Automated Distraction via Semantic-level UI Element Injection

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current red-teaming approaches for GUI agents often rely on white-box access or are easily mitigated by alignment mechanisms, limiting their ability to realistically assess agent robustness. This work proposes a semantic-level UI element injection attack that, under black-box conditions, overlays seemingly benign yet misleading visual elements onto interface screenshots via an Editor-Overlapper-Victim pipeline combined with an iterative search strategy. The method achieves the first effective visual perturbation against mainstream GUI agents, exposing a cross-model vulnerability in attention mechanisms and demonstrating that injected elements can persistently induce erroneous click behaviors. Experiments show the attack raises success rates by up to 4.4× over random-injection baselines; after a successful attack, target models click the malicious element with a probability exceeding 15%, far above the under-1% rate observed with random injections.
📝 Abstract
Existing red-teaming studies on GUI agents have important limitations. Adversarial perturbations typically require white-box access, which is unavailable for commercial systems, while prompt injection is increasingly mitigated by stronger safety alignment. To study robustness under a more practical threat model, we propose Semantic-level UI Element Injection, a red-teaming setting that overlays safety-aligned and harmless UI elements onto screenshots to misdirect the agent's visual grounding. Our method uses a modular Editor-Overlapper-Victim pipeline and an iterative search procedure that samples multiple candidate edits, keeps the best cumulative overlay, and adapts future prompt strategies based on previous failures. Across five victim models, our optimized attacks improve attack success rate by up to 4.4x over random injection on the strongest victims. Moreover, elements optimized on one source model transfer effectively to other target models, indicating model-agnostic vulnerabilities. After the first successful attack, the victim still clicks the attacker-controlled element in more than 15% of later independent trials, versus below 1% for random injection, showing that the injected element acts as a persistent attractor rather than simple visual clutter.
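The iterative search the abstract describes — sample several candidate UI-element edits, query the black-box victim, keep the best overlay so far, and feed failures back into later proposals — can be sketched roughly as follows. This is a minimal, hypothetical illustration: the `sample_candidate_edits` and `victim_click_rate` functions are stand-ins for the paper's Editor and Victim stages (stubbed with random values here), not the authors' actual implementation.

```python
import random

random.seed(0)

def sample_candidate_edits(history, k=4):
    # Hypothetical Editor stage: propose k benign-looking overlay elements.
    # A real pipeline would condition these proposals on prior failures
    # recorded in `history`.
    return [
        {"label": f"edit-{len(history)}-{i}",
         "x": random.randint(0, 90),
         "y": random.randint(0, 90)}
        for i in range(k)
    ]

def victim_click_rate(overlay):
    # Hypothetical black-box Victim oracle: fraction of trials in which the
    # agent clicks inside the injected element. Stubbed with noise here; in
    # practice it would run the GUI agent on the edited screenshot.
    return random.random()

def iterative_search(rounds=5, k=4):
    history, best_overlay, best_rate = [], None, 0.0
    for _ in range(rounds):
        candidates = sample_candidate_edits(history, k)
        rate, overlay = max(
            ((victim_click_rate(c), c) for c in candidates),
            key=lambda pair: pair[0],
        )
        if rate > best_rate:
            best_rate, best_overlay = rate, overlay  # keep best overlay so far
        else:
            history.append(overlay)  # record failure to adapt later proposals
    return best_overlay, best_rate

best, rate = iterative_search()
```

The key design point the abstract emphasizes is that the search is cumulative: unsuccessful edits are not discarded silently but inform how future candidates are generated.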
Problem

Research questions and friction points this paper is trying to address.

GUI agents
red-teaming
semantic-level UI element injection
visual grounding
adversarial robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic-level UI Element Injection
GUI agents
red-teaming
visual grounding
adversarial transferability
Wenkui Yang
SAIS, UCAS; MAIS&NLPR, CASIA; SIST, ShanghaiTech University
Chao Jin
MAIS&NLPR, CASIA
Haisu Zhu
MAIS&NLPR, CASIA; SIST, ShanghaiTech University
Weilin Luo
Huawei Noah’s Ark Lab
Derek Yuen
Huawei Noah’s Ark Lab
Kun Shao
Huawei
AI Agent, reinforcement learning, multi-agent systems, embodied AI, game AI
Huaibo Huang
NLPR, MAIS, CASIA
Computer Vision, Generative Models, Low-level Vision, Face Recognition
Junxian Duan
Institute of Automation, Chinese Academy of Sciences
computer vision
Jie Cao
Institute of Automation, Chinese Academy of Sciences
Computer Vision
Ran He
SAIS, UCAS; MAIS&NLPR, CASIA; SIST, ShanghaiTech University