🤖 AI Summary
This study investigates the susceptibility of large language model–driven computer-use agents (CUAs) to dark patterns, such as pre-selected options, misleading framing, and hidden information, in real-world graphical user interface interactions. To this end, we introduce SusBench, the first CUA-specific dark-pattern benchmark, comprising 313 consumer-task scenarios across 55 live websites. We faithfully instantiate nine categories of dark patterns in authentic web environments via code injection and validate the benchmark through human evaluation and multi-agent testing. Experiments show that five state-of-the-art CUAs are significantly vulnerable to several implicit manipulative tactics, to a degree comparable to human users. Our work establishes the first dedicated dark-pattern evaluation framework for CUAs and empirically reveals both the promise and the limitations of agents as behavioral proxies for humans. These findings provide foundational evidence and methodological support for enhancing AI trustworthiness, informing regulatory policy, and improving secure human–agent interaction design.
📝 Abstract
As LLM-based computer-use agents (CUAs) begin to autonomously interact with real-world interfaces, understanding their vulnerability to manipulative interface designs becomes increasingly critical. We introduce SusBench, an online benchmark for evaluating the susceptibility of CUAs to UI dark patterns: designs that aim to manipulate or deceive users into taking unintended actions. Drawing nine common dark-pattern types from existing taxonomies, we developed a method for constructing believable dark patterns on real-world consumer websites through code injection, and designed 313 evaluation tasks across 55 websites. Our study with 29 participants showed that humans perceived the injected dark patterns as highly realistic; the vast majority of participants did not notice that they had been injected by the research team. We evaluated five state-of-the-art CUAs on the benchmark and found that both human participants and agents are particularly susceptible to the Preselection, Trick Wording, and Hidden Information dark patterns, while remaining resilient to other, more overt dark patterns. Our findings inform the development of more trustworthy CUAs, their potential use as human proxies in evaluating deceptive designs, and the regulation of an online environment increasingly navigated by autonomous agents.
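To give a concrete sense of what a code-injected dark pattern looks like, here is a minimal sketch of a Preselection injection. This is not the benchmark's actual injection code; the element shape, selector, and label copy are hypothetical, and a real injection would mutate the live DOM of the target website.

```javascript
// Illustrative sketch of a "Preselection" dark-pattern injection.
// A checkbox-like element is opted in by default, and its label is
// softened so the preselected state reads as benign.
// (Hypothetical structure and copy; not the benchmark's actual scripts.)
function injectPreselection(checkbox) {
  checkbox.checked = true; // dark pattern: user is opted in by default
  checkbox.label = "Keep me updated (recommended)"; // reframed label
  return checkbox;
}

// Against a live page, this would run on a real element, e.g.:
// const box = document.querySelector("#newsletter-optin"); // hypothetical id
// injectPreselection(box);
```

A faithful injection preserves the site's surrounding styling so the manipulated control is indistinguishable from native UI, which is what makes the pattern believable to both humans and agents.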