🤖 AI Summary
Existing UI-agent evaluations rely on static application environments and therefore cannot reflect reliability under real-world deployment, where interfaces and content evolve. To address this, the authors propose OpenApps, a lightweight, open-source evaluation ecosystem of six apps that are configurable in appearance and content and support large-scale automated testing. Requiring only a single CPU to run, OpenApps can rapidly generate thousands of app variants, enabling the first systematic quantification of how environmental variation affects agent performance. Across more than 10,000 independent evaluations of seven state-of-the-art multimodal agents, task success rates fluctuate by more than 50% across app versions; for example, one agent's average success drops from 63% to just 4%, exposing serious robustness gaps. This work moves beyond static evaluation paradigms, establishing a dynamic, scalable benchmark for rigorously assessing UI-agent reliability.
📝 Abstract
Reliability is key to realizing the promise of autonomous UI-Agents, multimodal agents that interact with apps directly in the same manner as humans, as users must be able to trust an agent to complete a given task. Current evaluations rely on fixed environments, often clones of existing apps, which are limited in that they can only shed light on whether or how often an agent can complete a task within a specific environment. When deployed, however, agents are likely to encounter variations in app design and content that can affect their ability to complete a task. To address this blind spot of measuring agent reliability across app variations, we develop OpenApps, a lightweight open-source ecosystem with six apps (messenger, calendar, maps, etc.) that are configurable in appearance and content. OpenApps requires just a single CPU to run, enabling easy generation and deployment of thousands of versions of each app. Specifically, we run more than 10,000 independent evaluations to study reliability across seven leading multimodal agents. We find that while standard reliability within a fixed app is relatively stable, reliability can vary drastically when measured across app variations. Task success rates for many agents can fluctuate by more than 50% across app variations. For example, Kimi-VL-3B's average success across all tasks fluctuates from 63% to just 4% across app versions. We also find agent behaviors such as looping or hallucinating actions can differ drastically depending on the environment configuration. These initial findings highlight the importance of measuring reliability along this new dimension of app variations. OpenApps is available at https://facebookresearch.github.io/OpenApps/
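As a rough illustration of the evaluation protocol described in the abstract, the sketch below shows how per-variant success rates might be aggregated to expose fluctuation across app versions. The names used here (`make_app_config`, `run_agent_on_task`) are hypothetical placeholders, not the actual OpenApps API; this is a minimal sketch assuming each app variant can be described by a simple configuration of appearance and content.

```python
import random
from statistics import mean

# Hypothetical stand-ins -- NOT the real OpenApps API. They illustrate the idea
# of sampling app variants (appearance + content) and measuring success per variant.

def make_app_config(seed: int) -> dict:
    """Sample one app variant: appearance and content settings (illustrative only)."""
    rng = random.Random(seed)
    return {
        "theme": rng.choice(["light", "dark", "high_contrast"]),
        "font_scale": rng.choice([0.9, 1.0, 1.2]),
        "content_seed": rng.randrange(10_000),
    }

def run_agent_on_task(agent, task: str, config: dict) -> bool:
    """Placeholder: run one episode in the configured app and report success."""
    raise NotImplementedError("Plug in your agent and environment here.")

def success_rate_per_variant(agent, tasks: list[str], n_variants: int = 100) -> list[float]:
    """Average success over tasks, computed separately for each app variant."""
    rates = []
    for seed in range(n_variants):
        config = make_app_config(seed)
        results = [run_agent_on_task(agent, t, config) for t in tasks]
        rates.append(mean(results))
    return rates

# Reliability across app variations is the spread of per-variant success rates,
# rather than a single success number measured in one fixed environment:
# rates = success_rate_per_variant(my_agent, my_tasks)
# print(f"spread across app versions: {max(rates) - min(rates):.2f}")
```

The spread of these per-variant rates is the quantity the abstract highlights: a single agent's average success can range from 63% down to 4% depending on the app version it encounters.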