WebSP-Eval: Evaluating Web Agents on Website Security and Privacy Tasks

📅 2026-04-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the absence of systematic evaluation frameworks for web agents in user-facing security and privacy tasks, such as managing cookies and configuring privacy settings. To bridge this gap, we propose WebSP-Eval, the first benchmark suite tailored to these tasks, comprising 200 tasks across 28 real-world websites, a Chrome extension-based agent system supporting account and state management, and an automated evaluator. Using this framework, we evaluate eight state-of-the-art web agents and find that they exhibit failure rates exceeding 45% on tasks involving stateful UI elements like toggles and checkboxes, revealing significant limitations in autonomous exploration. Our study not only uncovers critical bottlenecks in current agent capabilities but also provides fine-grained analytical dimensions and a reproducible evaluation infrastructure for future research.
📝 Abstract
Web agents automate browser tasks, ranging from simple form completion to complex workflows like ordering groceries. While current benchmarks evaluate general-purpose performance (e.g., WebArena) or safety against malicious actions (e.g., SafeArena), no existing framework assesses an agent's ability to successfully execute user-facing website security and privacy tasks, such as managing cookie preferences, configuring privacy-sensitive account settings, or revoking inactive sessions. To address this gap, we introduce WebSP-Eval, an evaluation framework for measuring web agent performance on website security and privacy tasks. WebSP-Eval comprises 1) a manually crafted task dataset of 200 task instances across 28 websites; 2) a robust agentic system supporting account and initial state management across runs using a custom Google Chrome extension; and 3) an automated evaluator. We evaluate a total of 8 web agent instantiations using state-of-the-art multimodal large language models, conducting a fine-grained analysis across websites, task categories, and UI elements. Our evaluation reveals that current models lack the autonomous exploration capabilities needed to reliably solve website security and privacy tasks, and struggle with specific task categories and websites. Crucially, we identify stateful UI elements such as toggles and checkboxes as a primary cause of agent failure: across many models, agents fail on more than 45% of tasks containing these elements.
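The abstract describes an automated evaluator that judges task success by the final state of stateful UI elements (toggles, checkboxes). A minimal sketch of that idea, assuming a hypothetical `check_task` helper and illustrative element names not taken from the paper, might compare an expected post-task state against the state observed after the agent finishes:

```python
# Hypothetical sketch of a state-based evaluator for stateful UI tasks.
# The function name and element keys are illustrative, not from WebSP-Eval.

def check_task(expected_state: dict, observed_state: dict) -> bool:
    """A task passes only if every stateful element (toggle, checkbox)
    ends in its expected post-task value."""
    return all(observed_state.get(element) == value
               for element, value in expected_state.items())

# Example: the agent was asked to disable ad personalization
# and enable two-factor authentication.
expected = {"ad_personalization": False, "two_factor_auth": True}
observed = {"ad_personalization": False, "two_factor_auth": False}  # 2FA toggle missed
print(check_task(expected, observed))  # False: one toggle left in the wrong state
```

Evaluating on final element state rather than the agent's action trace is what makes failures on toggles and checkboxes directly measurable: a toggle flipped twice looks like activity in the trace but yields a failing state check.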
Problem

Research questions and friction points this paper is trying to address.

web agents
security and privacy tasks
evaluation framework
stateful UI elements
autonomous exploration
Innovation

Methods, ideas, or system contributions that make the work stand out.

WebSP-Eval
web agents
security and privacy tasks
stateful UI elements
automated evaluation
Guruprasad Viswanathan Ramesh
University of Wisconsin-Madison
Asmit Nayak
University of Wisconsin-Madison
Basieem Siddique
University of Wisconsin-Madison
Kassem Fawaz
University of Wisconsin-Madison
Mobile Systems · Internet of Things · Usable Security and Privacy · Location Privacy