Dark Patterns Meet GUI Agents: LLM Agent Susceptibility to Manipulative Interfaces and the Role of Human Oversight

📅 2025-09-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study presents the first systematic evaluation of LLM-driven GUI agents' susceptibility to 16 dark patterns, exposing their fundamental failure to detect interface manipulation and their tendency to prioritize task completion over interface integrity. We employ a two-stage empirical methodology: (1) isolated agent behavior testing against human judgments of deception, and (2) analysis of human-AI collaboration, measuring detection accuracy, visual attention allocation, and cognitive load. Results reveal that agents lack intent-inference capabilities; that humans rely heavily on cognitive heuristics, which impairs dark pattern recognition; and that while human-AI collaboration improves detection accuracy by +23%, it induces attentional narrowing and supervisory fatigue. Our core contributions are threefold: (i) a formal characterization of GUI agents' unique failure modes at the level of interface ethics; (ii) identification of three design principles (transparent decision tracing, adjustable autonomy, and layered supervision); and (iii) theoretical and practical foundations for enhancing the interactional robustness of trustworthy AI agents.

📝 Abstract
Dark patterns, deceptive interface designs that manipulate user behavior, have been extensively studied for their effects on human decision-making and autonomy. Yet with the rising prominence of LLM-powered GUI agents that automate tasks from high-level intents, understanding how dark patterns affect these agents is increasingly important. We present a two-phase empirical study examining how agents, human participants, and human-AI teams respond to 16 types of dark patterns across diverse scenarios. Phase 1 shows that agents often fail to recognize dark patterns and, even when aware, prioritize task completion over protective action. Phase 2 reveals divergent failure modes: humans succumb to cognitive shortcuts and habitual compliance, while agents falter from procedural blind spots. Human oversight improved avoidance but introduced costs such as attentional tunneling and cognitive load. Our findings show that neither humans nor agents are uniformly resilient, and that collaboration introduces new vulnerabilities, pointing to design needs for transparency, adjustable autonomy, and oversight.
Problem

Research questions and friction points this paper addresses.

Examining LLM agent susceptibility to manipulative dark patterns
Comparing human and AI responses to deceptive interface designs
Investigating human oversight impact on dark pattern avoidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-phase empirical study of agent susceptibility to dark patterns
Human oversight improves dark pattern avoidance
Design needs for transparency and autonomy