D-GARA: A Dynamic Benchmarking Framework for GUI Agent Robustness in Real-World Anomalies

📅 2025-11-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing GUI agent benchmarks rely on static, idealized environments and thus fail to evaluate robustness under realistic anomalies, such as permission dialogs or low-battery warnings, that frequently disrupt interaction on mobile platforms. Method: The authors propose D-GARA, a dynamic anomaly evaluation framework for Android that uses runtime injection to embed diverse, realistic disturbances into live applications, and they construct a benchmark dataset with fine-grained anomaly annotations. The framework supports extensible definitions of both tasks and anomaly types. Contribution/Results: Experiments reveal that state-of-the-art GUI agents suffer an average >40% drop in task success rate under anomalies, exposing critical robustness gaps. The work both empirically validates the need for anomaly-robust training and delivers a reproducible, extensible evaluation platform for real-world GUI anomaly resilience.

📝 Abstract
Developing intelligent agents capable of operating a wide range of Graphical User Interfaces (GUIs) with human-level proficiency is a key milestone on the path toward Artificial General Intelligence. However, most existing datasets and benchmarks for training and evaluating GUI agents are static and idealized, failing to reflect the complexity and unpredictability of real-world environments, particularly the presence of anomalies. To bridge this research gap, we propose D-GARA, a dynamic benchmarking framework for evaluating Android GUI agent robustness under real-world anomalies. D-GARA introduces a diverse set of anomalies that GUI agents commonly face in practice, including interruptions such as permission dialogs, battery warnings, and update prompts. Based on the D-GARA framework, we construct and annotate a benchmark featuring commonly used Android applications with embedded anomalies to support broader community research. Comprehensive experiments demonstrate substantial performance degradation in state-of-the-art GUI agents when exposed to anomaly-rich environments, highlighting the need for robustness-aware learning. D-GARA is modular and extensible, supporting the seamless integration of new tasks, anomaly types, and interaction scenarios to meet specific evaluation goals.
Problem

Research questions and friction points this paper is trying to address.

Evaluating GUI agent robustness against real-world anomalies in Android interfaces
Addressing performance degradation caused by interruptions such as permission dialogs
Providing a dynamic benchmarking framework for testing in anomaly-rich environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic benchmarking framework for evaluating GUI agent robustness
Introduces diverse real-world anomalies into Android environments
Modular, extensible design supports new tasks, anomaly types, and scenarios
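The summary describes runtime injection of anomalies and an extensible registry of tasks and anomaly types. The paper's actual API is not shown here, but a minimal sketch of that design pattern, with entirely hypothetical names (`register_anomaly`, `run_episode`), might look like this:

```python
import random
from dataclasses import dataclass
from typing import Callable, Dict, List

# Illustrative sketch of a D-GARA-style extensible anomaly registry.
# All identifiers are assumptions for illustration, not the paper's real API.

@dataclass
class Anomaly:
    name: str
    trigger: Callable[[], None]  # injects the disturbance into the live app

REGISTRY: Dict[str, Anomaly] = {}

def register_anomaly(name: str):
    """Decorator that registers a new anomaly type with the framework."""
    def wrap(fn: Callable[[], None]) -> Callable[[], None]:
        REGISTRY[name] = Anomaly(name, fn)
        return fn
    return wrap

@register_anomaly("permission_dialog")
def show_permission_dialog() -> None:
    print("Injected: permission dialog overlay")

@register_anomaly("battery_warning")
def show_battery_warning() -> None:
    print("Injected: low-battery warning")

def run_episode(task_steps: List[str], anomaly_rate: float = 0.5,
                seed: int = 0) -> None:
    """Interleave scripted task steps with randomly injected anomalies."""
    rng = random.Random(seed)
    for step in task_steps:
        if rng.random() < anomaly_rate:
            rng.choice(sorted(REGISTRY.values(), key=lambda a: a.name)).trigger()
        print(f"Agent executes: {step}")

run_episode(["open app", "tap search", "type query"])
```

The decorator-based registry is what makes such a framework extensible: adding a new anomaly type (e.g. an update prompt) requires only defining one function, with no changes to the episode runner.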