WebFactory: Automated Compression of Foundational Language Intelligence into Grounded Web Agents

📅 2026-03-05
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of training GUI agents, which traditionally rely on unsafe real-time interactions or costly human-annotated data and struggle to leverage the implicit knowledge embedded in large language models (LLMs). To overcome this, the authors propose WebFactory, the first fully automated, closed-loop reinforcement learning framework that distills internet-scale knowledge from LLMs into executable GUI interaction policies. WebFactory integrates a scalable synthetic web environment, knowledge-aware task generation, LLM-guided trajectory collection, and a decomposed reward mechanism. Remarkably, agents trained on only ten synthetic websites match or surpass state-of-the-art methods that depend on extensive human data on both offline and online transfer benchmarks, while significantly outperforming the base LLM. These results validate the "embodied potential" of LLMs and establish a new paradigm centered on knowledge-compression efficiency.

📝 Abstract
Current paradigms for training GUI agents are fundamentally limited by a reliance on either unsafe, non-reproducible live web interactions or costly, scarce human-crafted data and environments. We argue this focus on data volume overlooks a more critical factor: the efficiency of compressing a large language model's (LLM) latent knowledge into actionable agent behavior. We introduce WebFactory, a novel, fully automated closed-loop reinforcement learning pipeline for GUI agents that systematically compresses LLM-encoded internet intelligence into efficient, grounded actions. Our pipeline comprises scalable environment synthesis, knowledge-aware task generation, LLM-powered trajectory collection, decomposed-reward RL training, and systematic agent evaluation. Remarkably, our agent demonstrates exceptional data efficiency and generalization. Trained on synthetic data from only 10 websites within WebFactory, it achieves performance comparable to GUI agents trained on the same amount of human-annotated data from a much larger set of environments. This superior performance is consistent across our internal offline and online transfer benchmarks, where our agent also significantly outperforms the base foundation model. We further provide critical insights into the "embodiment potential" of different LLM foundations, offering a new axis for model evaluation. This work presents a scalable and cost-effective paradigm for transforming passive internet knowledge into active, grounded intelligence, marking a critical step towards general-purpose interactive agents.
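The closed-loop pipeline the abstract describes (task generation → LLM-guided rollout → decomposed reward → RL update) can be illustrated with a minimal sketch. All names below (`Step`, `decomposed_reward`, `training_iteration`, the reward weights) are illustrative assumptions, not the paper's actual interfaces; the decomposed reward here simply combines a sparse task-outcome signal with a dense per-step grounding signal, one plausible reading of "decomposed reward mechanism".

```python
# Hypothetical sketch of a WebFactory-style closed loop. These names and the
# exact reward decomposition are assumptions for illustration, not the
# paper's implementation.
from dataclasses import dataclass


@dataclass
class Step:
    action: str     # e.g. "click(#submit)" issued by the LLM policy
    grounded: bool  # did the action resolve to a real UI element?


def decomposed_reward(steps, task_completed, w_outcome=1.0, w_ground=0.5):
    """Combine a sparse outcome reward with a dense grounding reward."""
    if not steps:
        return 0.0
    grounding = sum(s.grounded for s in steps) / len(steps)  # in [0, 1]
    outcome = 1.0 if task_completed else 0.0
    return w_outcome * outcome + w_ground * grounding


def training_iteration(generate_task, rollout, update_policy):
    """One pass of the closed loop; the callables stand in for the synthetic
    environment, the LLM agent, and the RL optimizer."""
    task = generate_task()                   # knowledge-aware task generation
    steps, done = rollout(task)              # LLM-guided trajectory collection
    reward = decomposed_reward(steps, done)  # decomposed reward
    update_policy(task, steps, reward)       # RL policy update
    return reward
```

The separation into callables mirrors why the pipeline is "fully automated": each stage can be swapped (a different task generator, a different base LLM) without touching the loop itself.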
Problem

Research questions and friction points this paper is trying to address.

GUI agents
data efficiency
embodied intelligence
LLM compression
web automation
Innovation

Methods, ideas, or system contributions that make the work stand out.

WebFactory
LLM compression
grounded web agents
automated RL pipeline
embodiment potential
Sicheng Fan
Fudan University
Qingyun Shi
Fudan University
Shengze Xu
The Chinese University of Hong Kong
Shengbo Cai
IMean AI
Tieyong Zeng
Professor, Director of CMAI, Department of Mathematics, The Chinese University of Hong Kong
Data science
Li Ling
KTH - Royal Institute of Technology
computer vision, deep learning, robotics, autonomous navigation
Yanyi Shang
IMean AI
Dehan Kong
IMean AI