AndroidWorld: A Dynamic Benchmarking Environment for Autonomous Agents

📅 2024-05-23
🏛️ arXiv.org
📈 Citations: 33
Influential: 13
📄 PDF
🤖 AI Summary
Problem: Autonomous agents lack reproducible, dynamic benchmarks for evaluating human-task execution on real mobile devices. Method: This paper introduces the first dynamic, reproducible Android benchmark environment, comprising 116 parameterized, natural-language-described tasks across 20 real-world apps, each with full initialization, success-verification, and environment clean-up protocols. It integrates dynamic task generation, system-level state awareness, and controllable environmental intervention to overcome the limitations of static test sets, and implements an Android evaluation platform featuring programmatic orchestration, natural-language task interfaces, and a baseline-agent evaluation framework. Results: The best baseline agent completes only 30.6% of tasks; cross-platform transfer (a desktop web agent adapted to Android) degrades performance significantly; and minor task perturbations induce substantial performance fluctuations, demonstrating the need for dynamic assessment.

📝 Abstract
Autonomous agents that execute human tasks by controlling computers can enhance human productivity and application accessibility. However, progress in this field will be driven by realistic and reproducible benchmarks. We present AndroidWorld, a fully functional Android environment that provides reward signals for 116 programmatic tasks across 20 real-world Android apps. Unlike existing interactive environments, which provide a static test set, AndroidWorld dynamically constructs tasks that are parameterized and expressed in natural language in unlimited ways, thus enabling testing on a much larger and more realistic suite of tasks. To ensure reproducibility, each task includes dedicated initialization, success-checking, and tear-down logic, which modifies and inspects the device's system state. We experiment with baseline agents to test AndroidWorld and provide initial results on the benchmark. Our best agent can complete 30.6% of AndroidWorld's tasks, leaving ample room for future work. Furthermore, we adapt a popular desktop web agent to work on Android, which we find to be less effective on mobile, suggesting future research is needed to achieve universal, cross-platform agents. Finally, we also conduct a robustness analysis, showing that task variations can significantly affect agent performance, demonstrating that without such testing, agent performance metrics may not fully reflect practical challenges. AndroidWorld and the experiments in this paper are available at github.com/google-research/android_world.
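The abstract describes each task as parameterized, expressed in natural language, and equipped with initialization, success-checking, and tear-down logic that inspects the device's system state. The sketch below illustrates that pattern only; all class and field names (`AddContactTask`, `FakeDeviceState`, etc.) are hypothetical and not AndroidWorld's actual API, and the simulated in-memory state stands in for a real Android device.

```python
import random
from dataclasses import dataclass, field


@dataclass
class FakeDeviceState:
    """Stand-in for on-device system state an agent would manipulate."""
    contacts: dict = field(default_factory=dict)


class AddContactTask:
    """A parameterized task: add a named contact with a phone number."""

    def __init__(self, rng: random.Random):
        # Dynamic parameterization: each instantiation samples fresh values,
        # so the same task template yields many concrete task instances.
        self.name = rng.choice(["Alice", "Bob", "Carol"])
        self.number = "".join(rng.choice("0123456789") for _ in range(10))

    @property
    def goal(self) -> str:
        # Natural-language instruction handed to the agent.
        return f"Create a contact named {self.name} with number {self.number}."

    def initialize(self, state: FakeDeviceState) -> None:
        # Put the device into a known starting state for reproducibility.
        state.contacts.pop(self.name, None)

    def is_successful(self, state: FakeDeviceState) -> bool:
        # Reward signal comes from inspecting system state,
        # not from matching the agent's action trace.
        return state.contacts.get(self.name) == self.number

    def tear_down(self, state: FakeDeviceState) -> None:
        # Clean up so subsequent tasks start from a known state.
        state.contacts.pop(self.name, None)


# Usage: a trivial "agent" that performs the task directly.
rng = random.Random(0)
task = AddContactTask(rng)
state = FakeDeviceState()
task.initialize(state)
state.contacts[task.name] = task.number  # the agent's actions would go here
assert task.is_successful(state)
task.tear_down(state)
assert task.name not in state.contacts
```

In the real benchmark the equivalent of `FakeDeviceState` is the Android device itself, queried and modified programmatically, which is what makes the success check robust to how the agent reached the goal.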
Problem

Research questions and friction points this paper is trying to address.

Develops AndroidWorld for benchmarking autonomous agents' task performance
Enables dynamic, parameterized tasks for realistic agent testing
Assesses cross-platform agent adaptability and robustness challenges
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic task construction with natural language
Reproducible benchmarks with system state control
Cross-platform agent adaptation for mobile
Christopher Rawles (Google DeepMind)
Sarah Clinckemaillie (Google)
Yifan Chang (Google)
Jonathan Waltz (Google)
Gabrielle Lau (Google)
Marybeth Fair (Google)
Alice Li (Google DeepMind)
Will Bishop (Google DeepMind)
Wei Li (Google DeepMind)
Folawiyo Campbell-Ajala (Google DeepMind)
Daniel Toyama (Google DeepMind)
Robert Berry (Google DeepMind)
Divya Tyamagundlu (Google)
Timothy Lillicrap (Google DeepMind)
Oriana Riva (Google Research)
NLP · AI · Mobile systems