LearnAct: Few-Shot Mobile GUI Agent with a Unified Demonstration Benchmark

📅 2025-04-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Mobile GUI agents generalize poorly in diverse real-world scenarios and rely heavily on large-scale training data. Method: the paper proposes a demonstration-driven few-shot learning paradigm that replaces massive datasets with a small number of human demonstrations. It introduces LearnGUI, the first dual-mode (offline + online) benchmark of mobile GUI demonstrations, and LearnAct, a multi-agent framework comprising DemoParser (demonstration parsing), KnowSeeker (knowledge retrieval), and ActExecutor (action execution) modules, which automatically extracts demonstration knowledge and transfers it across applications and tasks. Results: a single demonstration yields large gains: Gemini-1.5-Pro's offline accuracy rises from 19.3% to 51.7%, and UI-TARS-7B-SFT's online task success rate increases from 18.1% to 32.8%. The work establishes few-shot demonstration learning as a promising direction for mobile GUI agents and provides both a benchmark and an architectural foundation for lightweight, generalizable agents.
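The parse → retrieve → execute pipeline described above can be sketched as follows. This is a minimal illustration of the three-stage structure only; all class names, method signatures, the knowledge format, and the word-overlap relevance score are hypothetical assumptions, not the paper's actual API (the real ActExecutor conditions an LLM on the retrieved knowledge).

```python
from dataclasses import dataclass


@dataclass
class Demonstration:
    """A recorded human demonstration: screen states paired with actions."""
    task: str
    steps: list  # list of (screen_description, action) pairs


class DemoParser:
    """Stage 1: extract reusable knowledge from a raw demonstration."""

    def parse(self, demo: Demonstration) -> dict:
        # Condense each (screen, action) pair into a textual rule.
        rules = [f"On '{screen}', do '{action}'" for screen, action in demo.steps]
        return {"task": demo.task, "rules": rules}


class KnowSeeker:
    """Stage 2: retrieve knowledge relevant to the current task."""

    def __init__(self, knowledge_base: list):
        self.knowledge_base = knowledge_base

    def retrieve(self, query_task: str, k: int = 1) -> list:
        # Toy relevance score: count of shared words between task descriptions
        # (a real system would use embedding similarity).
        def score(entry):
            return len(set(query_task.lower().split())
                       & set(entry["task"].lower().split()))
        return sorted(self.knowledge_base, key=score, reverse=True)[:k]


class ActExecutor:
    """Stage 3: execute the task, conditioned on retrieved knowledge."""

    def execute(self, task: str, knowledge: list) -> list:
        # Here we simply surface the retrieved rules as the action plan;
        # the actual framework would prompt a model to plan GUI actions.
        plan = []
        for entry in knowledge:
            plan.extend(entry["rules"])
        return plan


demo = Demonstration(
    task="send a message in a chat app",
    steps=[("chat list", "tap contact"), ("chat view", "type and send")],
)
kb = [DemoParser().parse(demo)]
plan = ActExecutor().execute(
    "send a chat message", KnowSeeker(kb).retrieve("send a chat message")
)
print(plan)
```

The single-demonstration results quoted above correspond to passing one retrieved entry (k=1) into the executor, which is why retrieval quality matters as much as execution.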

📝 Abstract
Mobile GUI agents show promise in automating tasks but face generalization challenges in diverse real-world scenarios. Traditional approaches using pre-training or fine-tuning with massive datasets struggle with the diversity of mobile applications and user-specific tasks. We propose enhancing mobile GUI agent capabilities through human demonstrations, focusing on improving performance in unseen scenarios rather than pursuing universal generalization through larger datasets. To realize this paradigm, we introduce LearnGUI, the first comprehensive dataset specifically designed for studying demonstration-based learning in mobile GUI agents, comprising 2,252 offline tasks and 101 online tasks with high-quality human demonstrations. We further develop LearnAct, a sophisticated multi-agent framework that automatically extracts knowledge from demonstrations to enhance task completion. This framework integrates three specialized agents: DemoParser for knowledge extraction, KnowSeeker for relevant knowledge retrieval, and ActExecutor for demonstration-enhanced task execution. Our experimental results show significant performance gains in both offline and online evaluations. In offline assessments, a single demonstration improves model performance, increasing Gemini-1.5-Pro's accuracy from 19.3% to 51.7%. In online evaluations, our framework enhances UI-TARS-7B-SFT's task success rate from 18.1% to 32.8%. LearnAct framework and LearnGUI benchmark establish demonstration-based learning as a promising direction for more adaptable, personalized, and deployable mobile GUI agents.
Problem

Research questions and friction points this paper is trying to address.

Enhancing mobile GUI agents' performance in unseen scenarios
Overcoming generalization challenges with diverse mobile applications
Improving task success rates using human demonstrations
Innovation

Methods, ideas, or system contributions that make the work stand out.

LearnGUI dataset for demonstration-based learning
Multi-agent framework LearnAct for knowledge extraction
Integration of the DemoParser, KnowSeeker, and ActExecutor agents