🤖 AI Summary
Computer-using agents (CUAs) suffer from scarce real-world training data, and existing synthetic data generalizes poorly due to visual distortion and domain mismatch.
Method: This paper introduces a paradigm for constructing executable UI operation trajectories from internet instructional videos. Its core innovation is an inverse-dynamics modeling framework that recasts video action recognition as screen-state transition prediction, eliminating hand-crafted rules and improving cross-application generalization. The approach integrates task-aware video retrieval, an automated annotation pipeline, and a vision-to-action mapping model that combines supervised learning with in-context learning.
Contribution/Results: The paper releases the first large-scale, high-quality, open-source web-video-driven UI trajectory dataset (53K+ trajectories). On the OSWorld benchmark, the method significantly improves both proprietary and open-source CUAs, empirically validating instructional web videos as a scalable, high-fidelity training resource for UI automation.
📝 Abstract
Computer use agents (CUAs) need to plan task workflows grounded in diverse, ever-changing applications and environments, but learning is hindered by the scarcity of large-scale, high-quality training data in the target application. Existing datasets are domain-specific, static, and costly to annotate, while current synthetic data generation methods often yield simplistic or misaligned task demonstrations. To address these limitations, we introduce Watch & Learn (W&L), a framework that converts human demonstration videos readily available on the Internet into executable UI trajectories at scale. Instead of directly generating trajectories or relying on ad hoc reasoning heuristics, we cast the problem as an inverse dynamics objective: predicting the user's action from consecutive screen states. This formulation reduces manual engineering, is easier to learn, and generalizes more robustly across applications. Concretely, we develop an inverse dynamics labeling pipeline with task-aware video retrieval, generate over 53k high-quality trajectories from raw web videos, and demonstrate that these trajectories improve CUAs both as in-context demonstrations and as supervised training data. On the challenging OSWorld benchmark, UI trajectories extracted with W&L consistently enhance both general-purpose and state-of-the-art frameworks in-context, and deliver stronger gains for open-source models under supervised training. These results highlight web-scale human demonstration videos as a practical and scalable foundation for advancing CUAs towards real-world deployment.
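The inverse-dynamics objective described above can be sketched as follows. This is a minimal conceptual illustration, not the paper's implementation: all type names, function names, and the toy state/action strings are hypothetical, and a real pipeline would operate on video frames and a learned vision model rather than string labels.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# Illustrative types only; the paper's actual pipeline works on video frames.
@dataclass(frozen=True)
class Transition:
    before: str                    # screen state at time t
    after: str                     # screen state at time t+1
    action: Optional[str] = None   # label inferred by the inverse-dynamics model

def pair_frames(frames: List[str]) -> List[Transition]:
    """Turn a raw frame sequence into unlabeled consecutive-state transitions."""
    return [Transition(a, b) for a, b in zip(frames, frames[1:])]

def label_transitions(
    transitions: List[Transition],
    inverse_model: Callable[[str, str], str],
) -> List[Transition]:
    """Inverse dynamics: predict the action that caused each state change."""
    return [
        Transition(t.before, t.after, inverse_model(t.before, t.after))
        for t in transitions
    ]

# Toy stand-in for a learned inverse-dynamics model: it infers the user's
# action purely from how the screen changed between two states.
def toy_inverse_model(before: str, after: str) -> str:
    return "click_submit" if after == "confirmation" else "type_text"

frames = ["form_empty", "form_filled", "confirmation"]
trajectory = label_transitions(pair_frames(frames), toy_inverse_model)
```

The point of the formulation is visible even in this toy: the labeler never needs app-specific rules about *how* to act, only the ability to explain an observed state change, which is what makes the objective easier to learn and transferable across applications.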