🤖 AI Summary
This work addresses the challenge of training GUI agents, which traditionally rely on unsafe real-time interactions or costly human-annotated data and struggle to leverage the implicit knowledge embedded in large language models (LLMs). To overcome this, we propose WebFactory, the first fully automated, closed-loop reinforcement learning framework that efficiently distills internet-scale knowledge from LLMs into executable GUI interaction policies. WebFactory integrates a scalable synthetic web environment, knowledge-aware task generation, LLM-guided trajectory collection, and a decomposed reward mechanism. Remarkably, agents trained on only ten synthetic websites match or surpass state-of-the-art methods that depend on extensive human data on both offline and online transfer benchmarks, while significantly outperforming the base LLM. These results validate the "embodiment potential" of LLMs and establish a new paradigm centered on knowledge-compression efficiency.
📝 Abstract
Current paradigms for training GUI agents are fundamentally limited by their reliance on either unsafe, non-reproducible live web interactions or costly, scarce human-crafted data and environments. We argue that this focus on data volume overlooks a more critical factor: the efficiency with which a large language model's (LLM's) latent knowledge is compressed into actionable agent behavior. We introduce WebFactory, a novel, fully automated closed-loop reinforcement learning pipeline for GUI agents that systematically compresses LLM-encoded internet intelligence into efficient, grounded actions. The pipeline comprises scalable environment synthesis, knowledge-aware task generation, LLM-powered trajectory collection, decomposed-reward RL training, and systematic agent evaluation. Remarkably, our agent demonstrates exceptional data efficiency and generalization: trained on synthetic data from only 10 websites within WebFactory, it achieves performance comparable to GUI agents trained on the same amount of human-annotated data from a much larger set of environments. This performance is consistent across our internal offline and online transfer benchmarks, where our agent also significantly outperforms the base foundation model. We further provide critical insights into the "embodiment potential" of different LLM foundations, offering a new axis for model evaluation. This work presents a scalable and cost-effective paradigm for transforming passive internet knowledge into active, grounded intelligence, marking a critical step towards general-purpose interactive agents.