DigiData: Training and Evaluating General-Purpose Mobile Control Agents

📅 2025-11-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Mobile UI-control agents suffer from a scarcity of high-quality multimodal training data and from a lack of realistic, scenario-based evaluation benchmarks. Method: the paper introduces a systematic, functionality-driven task-construction approach that increases task diversity and complexity; designs a dynamic evaluation protocol and an LLM-powered AI evaluation framework to overcome the limitations of conventional step-accuracy metrics; and combines multimodal data collection, real-device interaction-trajectory recording, and a hybrid training pipeline that mixes reinforcement learning with supervised learning. Contribution/Results: the authors open-source DigiData, a large-scale mobile-interaction dataset, and DigiData-Bench, a companion benchmark suite. Empirical results show that the proposed evaluation methodology measures agent performance more reliably than step-wise accuracy, establishing a data foundation and an evaluation paradigm for general-purpose mobile UI agents.

📝 Abstract
AI agents capable of controlling user interfaces have the potential to transform human interaction with digital devices. To accelerate this transformation, two fundamental building blocks are essential: high-quality datasets that enable agents to achieve complex and human-relevant goals, and robust evaluation methods that allow researchers and practitioners to rapidly enhance agent performance. In this paper, we introduce DigiData, a large-scale, high-quality, diverse, multi-modal dataset designed for training mobile control agents. Unlike existing datasets, which derive goals from unstructured interactions, DigiData is meticulously constructed through comprehensive exploration of app features, resulting in greater diversity and higher goal complexity. Additionally, we present DigiData-Bench, a benchmark for evaluating mobile control agents on real-world complex tasks. We demonstrate that the commonly used step-accuracy metric falls short in reliably assessing mobile control agents and, to address this, we propose dynamic evaluation protocols and AI-powered evaluations as rigorous alternatives for agent assessment. Our contributions aim to significantly advance the development of mobile control agents, paving the way for more intuitive and effective human-device interactions.
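The abstract's claim that step accuracy falls short can be made concrete with a small sketch (not from the paper; all names are illustrative): an agent that matches a reference trajectory on almost every step can still fail the task outright, so per-step matching and end-to-end success can diverge sharply.

```python
# Hedged sketch: per-step accuracy vs. end-to-end task success.
# Action names and the all-or-nothing success rule are illustrative.

def step_accuracy(pred_actions, gold_actions):
    """Fraction of steps that match a reference trajectory."""
    matches = sum(p == g for p, g in zip(pred_actions, gold_actions))
    return matches / len(gold_actions)

def task_success(pred_actions, gold_actions):
    """The goal is reached only if the whole trajectory is right;
    a single wrong tap at the end fails the task."""
    return float(pred_actions == gold_actions)

# A 10-step episode: 9 of 10 steps correct, but the final tap is wrong.
gold = [f"tap_{i}" for i in range(10)]
pred = gold[:9] + ["tap_wrong"]

print(step_accuracy(pred, gold))  # 0.9 -- looks strong
print(task_success(pred, gold))   # 0.0 -- goal never achieved
```

This gap is one reason a benchmark may prefer goal-level, dynamically judged outcomes over trajectory matching.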
Problem

Research questions and friction points this paper is trying to address.

Creating high-quality datasets for training mobile control agents on complex tasks
Developing robust evaluation methods to reliably assess mobile control agents
Advancing general-purpose AI agents for intuitive human-device interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dataset with comprehensive app feature exploration
Benchmark for real-world mobile control tasks
Dynamic and AI-powered evaluation protocols
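The AI-powered evaluation idea above can be sketched as an LLM-as-judge loop over recorded episodes. This is a minimal illustration, not the paper's implementation: `call_judge` stands in for a real judge-model API and is stubbed here with a trivial final-screen keyword check.

```python
# Hedged sketch of AI-powered evaluation: a judge scores each recorded
# episode (goal + trajectory) and we aggregate a success rate.

def call_judge(goal: str, trajectory: list[str]) -> str:
    # Stub: a real system would send the goal plus screenshots/actions to
    # an LLM and parse its verdict. Here we just inspect the final screen.
    return "success" if goal.lower() in trajectory[-1].lower() else "failure"

def evaluate(episodes: list[dict]) -> float:
    """Fraction of episodes the judge marks as successful."""
    verdicts = [call_judge(e["goal"], e["trajectory"]) for e in episodes]
    return sum(v == "success" for v in verdicts) / len(verdicts)

episodes = [
    {"goal": "alarm set",
     "trajectory": ["open clock app", "screen: Alarm set for 7:00 AM"]},
    {"goal": "message sent",
     "trajectory": ["open chat", "screen: draft saved"]},
]
print(evaluate(episodes))  # 0.5
```

Judging the outcome rather than replaying a fixed action sequence is what makes the protocol dynamic: any trajectory that reaches the goal state can count as a success.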
Authors

Yuxuan Sun (FAIR at Meta)
Manchen Wang (FAIR at Meta)
Shengyi Qian (Research Scientist, Meta FAIR)
William R. Wong (FAIR at Meta)
Eric Gan (FAIR at Meta)
P. D'Oro (FAIR at Meta)
Alejandro Castillejo Munoz (FAIR at Meta)
S. Silwal (FAIR at Meta)
Pedro Matias (Meta Reality Labs)
Nitin Kamra (Meta Reality Labs)
Satwik Kottur (Research Scientist, Facebook AI)
Nick Raines (Meta Reality Labs)
Xuanyi Zhao (FAIR at Meta)
Joy Chen (FAIR at Meta)
Joseph Greer (FAIR at Meta)
Andrea Madotto (Research Scientist at FAIR)
Allen Bolourchi (FAIR at Meta, University of Southern California)
James Valori (FAIR at Meta)
Kevin Carlberg (Meta Reality Labs)
Karl Ridgeway (Facebook)
Joseph Tighe (Meta)