AgentTrek: Agent Trajectory Synthesis via Guiding Replay with Web Tutorials

📅 2024-12-12
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 1
🤖 AI Summary
High-quality multi-step GUI interaction trajectories for training GUI agents are scarce and prohibitively expensive to annotate manually. Method: the paper proposes a web-tutorial-based automated trajectory synthesis framework: (1) crawling open-source online tutorials and parsing them into structured, multi-step task specifications; (2) orchestrating a vision-language model (VLM) agent to execute these tasks and record trajectories in real GUI environments; and (3) employing a VLM-based evaluator for end-to-end automatic trajectory validation. The authors introduce "guided replay", the first paradigm enabling fully automated conversion of unstructured textual tutorials into executable, verifiable GUI trajectories without human annotation. Contribution/Results: experiments show that the synthesized trajectories significantly improve agent performance in GUI element grounding and multi-step planning, outperforming prior methods across multiple benchmarks, while cutting per-trajectory data cost by over an order of magnitude and enabling scalable, low-cost GUI agent training.
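The three-stage pipeline described above can be sketched in miniature. This is an illustrative stand-in, not the paper's code: the naive line-based parser, the stubbed replay agent, and the step-count "evaluator" are all placeholders for the VLM components the paper actually uses.

```python
# Hypothetical sketch of an AgentTrek-style pipeline; all names and
# heuristics are illustrative, not taken from the paper's implementation.
from dataclasses import dataclass, field

@dataclass
class TaskSpec:
    goal: str
    steps: list  # ordered natural-language instructions

@dataclass
class Trajectory:
    task: TaskSpec
    actions: list = field(default_factory=list)
    verified: bool = False

def parse_tutorial(text: str) -> TaskSpec:
    """Stage 1: turn a crawled tutorial into a structured task spec.
    Naively treat the first line as the goal and the rest as steps;
    the real system delegates this parsing to a language model."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    steps = [ln.lstrip("0123456789. ") for ln in lines[1:]]
    return TaskSpec(goal=lines[0], steps=steps)

def guided_replay(spec: TaskSpec) -> Trajectory:
    """Stage 2: a VLM agent would execute each step in a live GUI
    environment; this stub just records one action per step."""
    traj = Trajectory(task=spec)
    for step in spec.steps:
        traj.actions.append({"instruction": step, "action": f"execute({step!r})"})
    return traj

def evaluate(traj: Trajectory) -> Trajectory:
    """Stage 3: a VLM judge validates the rollout end to end.
    Stand-in heuristic: accept if every step produced an action."""
    traj.verified = len(traj.actions) == len(traj.task.steps)
    return traj

tutorial = """How to export a spreadsheet as CSV
1. Open the File menu.
2. Choose Download, then Comma-separated values.
3. Confirm the download location."""
trajectory = evaluate(guided_replay(parse_tutorial(tutorial)))
```

Only trajectories that pass the final validation stage would be kept as training data, which is what makes the synthesized corpus verifiable rather than merely plausible.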

๐Ÿ“ Abstract
Graphical User Interface (GUI) agents hold great potential for automating complex tasks across diverse digital environments, from web applications to desktop software. However, the development of such agents is hindered by the lack of high-quality, multi-step trajectory data required for effective training. Existing approaches rely on expensive and labor-intensive human annotation, making them unsustainable at scale. To address this challenge, we propose AgentTrek, a scalable data synthesis pipeline that generates high-quality GUI agent trajectories by leveraging web tutorials. Our method automatically gathers tutorial-like texts from the internet, transforms them into task goals with step-by-step instructions, and employs a visual-language model agent to simulate their execution in a real digital environment. A VLM-based evaluator ensures the correctness of the generated trajectories. We demonstrate that training GUI agents with these synthesized trajectories significantly improves their grounding and planning performance over the current models. Moreover, our approach is more cost-efficient compared to traditional human annotation methods. This work underscores the potential of guided replay with web tutorials as a viable strategy for large-scale GUI agent training, paving the way for more capable and autonomous digital agents.
Problem

Research questions and friction points this paper is trying to address.

How to generate web agent trajectories from web tutorials at scale.
How to reduce data-collection costs without human annotation.
How to enhance GUI agent performance with multimodal data.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated web tutorial harvesting and filtering
Structured task specifications from tutorials
Vision-language model agent for trajectory execution
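The first innovation bullet, harvesting and filtering tutorial-like pages from a web crawl, can be approximated with a simple heuristic. The keyword cues below are an assumption for illustration; the paper's actual filter is a learned model, not this hand-written rule.

```python
# Illustrative tutorial filter; TUTORIAL_CUES and the threshold are
# hypothetical stand-ins for the paper's learned filtering model.
TUTORIAL_CUES = ("step 1", "how to", "click", "navigate to", "select")

def looks_like_tutorial(text: str, min_cues: int = 2) -> bool:
    """Keep a crawled page only if it reads like a step-by-step guide."""
    lowered = text.lower()
    hits = sum(cue in lowered for cue in TUTORIAL_CUES)
    return hits >= min_cues

pages = [
    "How to change your password: Step 1, click Settings...",
    "Our company was founded in 1998 and has grown steadily.",
]
kept = [p for p in pages if looks_like_tutorial(p)]
```

Pages that survive this filter would then feed the parsing and guided-replay stages described in the summary above.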