AI Agents for Web Testing: A Case Study in the Wild

📅 2025-09-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional web testing relies heavily on code coverage and load testing, which often fail to detect usability issues arising from real user interaction patterns. To address this limitation, this paper introduces WebProber, a framework that integrates large language model (LLM)-driven AI agents into web usability testing. WebProber autonomously navigates websites, makes context-aware interaction decisions, identifies usability defects, and generates natural-language diagnostic reports. In an empirical study of 120 academic personal websites, WebProber uncovered 29 usability issues that conventional testing tools had missed, demonstrating its effectiveness in realistic deployment scenarios. This work establishes a user-behavior-centric paradigm for web testing and provides a foundational step toward AI-augmented software quality assurance.

📝 Abstract
Automated web testing plays a critical role in ensuring high-quality user experiences and delivering business value. Traditional approaches primarily focus on code coverage and load testing, but often fall short of capturing complex user behaviors, leaving many usability issues undetected. The emergence of large language models (LLMs) and AI agents opens new possibilities for web testing by enabling human-like interaction with websites and a general awareness of common usability problems. In this work, we present WebProber, a prototype AI agent-based web testing framework. Given a URL, WebProber autonomously explores the website, simulating real user interactions, identifying bugs and usability issues, and producing a human-readable report. We evaluate WebProber through a case study of 120 academic personal websites, where it uncovered 29 usability issues, many of which were missed by traditional tools. Our findings highlight agent-based testing as a promising approach and outline directions for developing next-generation, user-centered testing frameworks.
Problem

Research questions and friction points this paper is trying to address.

Simulating real user interactions to detect usability issues
Identifying bugs missed by traditional web testing tools
Developing AI agent frameworks for user-centered web testing
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI agent autonomously explores websites
Simulates real user interactions for testing
Identifies bugs and usability issues automatically
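The paper does not publish WebProber's implementation, but the workflow described above (given a URL, explore pages, decide the next interaction, flag usability issues, emit a readable report) can be sketched as a simple agent loop. Everything below is a hypothetical illustration: `fetch_page`, `choose_action`, and `check_usability` are stubs standing in for a real browser driver and an LLM policy, not WebProber's actual components.

```python
from dataclasses import dataclass


@dataclass
class Issue:
    """One detected usability problem, tied to the page it occurred on."""
    url: str
    description: str


def fetch_page(url):
    # Stub: a real agent would render the page in a browser
    # (e.g. via a Playwright or Selenium session) and extract its state.
    return {"url": url, "links": [], "text": "Welcome"}


def choose_action(page, history):
    # Stub for the LLM policy: given page content and visited history,
    # pick the next link to follow, or None to stop exploring.
    unvisited = [link for link in page["links"] if link not in history]
    return unvisited[0] if unvisited else None


def check_usability(page):
    # Stub heuristic; in an LLM-driven tester, the model itself would
    # judge the rendered page for broken layout, dead links, etc.
    issues = []
    if not page["text"].strip():
        issues.append(Issue(page["url"], "page renders with no visible text"))
    return issues


def explore(start_url, max_steps=20):
    """Explore a site from start_url and return a plain-text issue report."""
    history, issues = [], []
    url = start_url
    for _ in range(max_steps):
        page = fetch_page(url)
        history.append(url)
        issues.extend(check_usability(page))
        url = choose_action(page, history)
        if url is None:
            break
    lines = [f"- {i.url}: {i.description}" for i in issues]
    return "\n".join(lines) if lines else "No issues found."
```

With the stubs above, `explore("https://example.edu")` visits one page, finds nothing wrong, and returns `"No issues found."`; swapping in a real browser driver and an LLM-backed `choose_action` turns the same loop into the behavior-driven exploration the paper describes.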