TongUI: Building Generalized GUI Agents by Learning from Multimodal Web Tutorials

πŸ“… 2025-04-17
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Weak generalization of GUI agents, scarce cross-platform trajectory data, and the high cost of manual annotation hinder progress in GUI automation. To address these challenges, this paper proposes the first paradigm for automatically mining and constructing GUI interaction trajectories from real-world multimodal web tutorials (video and text-image). The approach integrates multimodal crawling, GUI element localization and action modeling, cross-platform trajectory standardization, and fine-tuning of a vision-language model (Qwen2.5-VL). This yields GUI-Net, a large-scale open-source dataset of 143K trajectories spanning five operating systems and more than 200 applications. Leveraging GUI-Net, the authors develop TongUI, an end-to-end GUI agent training framework that requires zero human annotation. Evaluated on mainstream GUI grounding and navigation benchmarks, TongUI achieves roughly a 10% average performance gain over prior methods. The GUI-Net dataset, source code, and trained models will be fully open-sourced.

πŸ“ Abstract
Building Graphical User Interface (GUI) agents is a promising research direction, which simulates human interaction with computers or mobile phones to perform diverse GUI tasks. However, a major challenge in developing generalized GUI agents is the lack of sufficient trajectory data across various operating systems and applications, mainly due to the high cost of manual annotation. In this paper, we propose the TongUI framework that builds generalized GUI agents by learning from rich multimodal web tutorials. Concretely, we crawl and process online GUI tutorials (such as videos and articles) into GUI agent trajectory data, through which we produce the GUI-Net dataset containing 143K trajectories across five operating systems and more than 200 applications. We develop the TongUI agent by fine-tuning Qwen2.5-VL-3B/7B models on GUI-Net, which show remarkable performance improvements on commonly used grounding and navigation benchmarks, outperforming baseline agents by about 10% on multiple benchmarks, demonstrating the effectiveness of the GUI-Net dataset and underscoring the significance of our TongUI framework. We will fully open-source the code, the GUI-Net dataset, and the trained models soon.
Problem

Research questions and friction points this paper is trying to address.

Lack of diverse GUI trajectory data for training agents
High cost of manual annotation for GUI tasks
Need for generalized GUI agents across systems and apps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning from multimodal web tutorials
Crawling online GUI tutorials into trajectory data
Fine-tuning Qwen2.5-VL models on GUI-Net
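To make the tutorial-to-trajectory idea concrete, below is a minimal sketch of what one standardized trajectory record might look like after crawling and processing. All field and class names here are hypothetical illustrations of the kind of schema such a pipeline could produce; they are not taken from the GUI-Net release.

```python
from dataclasses import dataclass, field, asdict

# Hypothetical schema for one grounded step mined from a tutorial.
# Field names are illustrative, not the actual GUI-Net format.
@dataclass
class TrajectoryStep:
    screenshot: str        # path to the frame/image for this step
    instruction: str       # natural-language step text from the tutorial
    action: str            # e.g. "click", "type", "scroll"
    target_bbox: tuple     # (x1, y1, x2, y2) of the localized GUI element
    text_input: str = ""   # typed text, for "type" actions

@dataclass
class Trajectory:
    task: str              # overall goal distilled from the tutorial title
    platform: str          # one of the five operating systems
    app: str               # application the tutorial targets
    steps: list = field(default_factory=list)

# Build a tiny example trajectory from a (hypothetical) text-image tutorial.
traj = Trajectory(task="Mute a browser tab", platform="Windows", app="Chrome")
traj.steps.append(TrajectoryStep(
    screenshot="frames/step_01.png",
    instruction="Right-click the noisy tab",
    action="click",
    target_bbox=(412, 8, 560, 34),
))
record = asdict(traj)      # JSON-serializable dict, one dataset entry
print(record["app"], len(record["steps"]))  # → Chrome 1
```

Standardizing every platform's steps into one record shape like this is what would let a single vision-language model be fine-tuned across operating systems and applications.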
πŸ”Ž Similar Papers
No similar papers found.
Bofei Zhang
BIGAI
Zirui Shang
State Key Laboratory of General Artificial Intelligence, BIGAI; Beijing Key Laboratory of Intelligent Information Technology, School of Computer Science & Technology, Beijing Institute of Technology
Zhi Gao
State Key Laboratory of General Artificial Intelligence, BIGAI; School of Intelligence Science and Technology, Peking University
Wang Zhang
Tianjin University
Rui Xie
State Key Laboratory of General Artificial Intelligence, BIGAI; Shanghai Jiao Tong University
Xiaojian Ma
University of California, Los Angeles
Tao Yuan
University of California, Los Angeles
Xinxiao Wu
Beijing Key Laboratory of Intelligent Information Technology, School of Computer Science & Technology, Beijing Institute of Technology
Song-Chun Zhu
State Key Laboratory of General Artificial Intelligence, BIGAI; School of Intelligence Science and Technology, Peking University; Department of Automation, Tsinghua University
Qing Li
State Key Laboratory of General Artificial Intelligence, BIGAI