🤖 AI Summary
Mobile UI-controlling agents suffer from a scarcity of high-quality multimodal data and realistic, scenario-based evaluation benchmarks. Method: This paper introduces a systematic, functionality-driven task construction approach that substantially increases task diversity and complexity; designs a dynamic evaluation protocol and a large-language-model-powered evaluation framework to overcome the limitations of the conventional step-accuracy metric; and integrates multimodal data collection, real-device interaction trajectory recording, and a hybrid training framework combining reinforcement learning with supervised learning. Contribution/Results: The authors open-source DigiData, a large-scale mobile interaction dataset, and DigiData-Bench, a companion benchmark suite. Empirical results show that the proposed evaluation methodology measures agent performance more reliably than step accuracy alone, establishing a critical data foundation and a novel evaluation paradigm for developing general-purpose mobile UI agents.
📝 Abstract
AI agents capable of controlling user interfaces have the potential to transform human interaction with digital devices. To accelerate this transformation, two fundamental building blocks are essential: high-quality datasets that enable agents to achieve complex and human-relevant goals, and robust evaluation methods that allow researchers and practitioners to rapidly enhance agent performance. In this paper, we introduce DigiData, a large-scale, high-quality, diverse, multimodal dataset designed for training mobile control agents. Unlike existing datasets, which derive goals from unstructured interactions, DigiData is meticulously constructed through comprehensive exploration of app features, resulting in greater diversity and higher goal complexity. Additionally, we present DigiData-Bench, a benchmark for evaluating mobile control agents on real-world complex tasks. We demonstrate that the commonly used step-accuracy metric falls short in reliably assessing mobile control agents and, to address this, we propose dynamic evaluation protocols and AI-powered evaluations as rigorous alternatives for agent assessment. Our contributions aim to significantly advance the development of mobile control agents, paving the way for more intuitive and effective human-device interactions.