🤖 AI Summary
Current AI agents remain significantly inferior to humans on tasks requiring domain-specific procedural knowledge—such as multi-step GUI interactions—where humans can rapidly acquire skills from video tutorials.
Method: We propose the first agent framework enabling *online video learning at inference time*. A vision-language model (VLM) performs video retrieval, action inference, temporal segmentation, and subgoal annotation, converting raw instructional videos into structured demonstration trajectories; a two-stage dynamic selection mechanism then integrates the most relevant trajectory into the agent's execution context at each step.
Contribution/Results: This end-to-end “video-to-action” paradigm enables real-time, on-the-fly procedural learning from open video resources—a capability previously unrealized. Evaluated on two major benchmarks, our approach substantially outperforms strong baselines, especially on complex multi-step tasks, empirically validating video as a viable and effective external knowledge source for runtime agent adaptation.
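The conversion step described above (inferred actions, temporal segmentation, subgoal annotation) can be pictured with a minimal data-structure sketch. The class and function names below are illustrative assumptions, not the paper's actual implementation; the segment boundaries and subgoal strings stand in for VLM outputs.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """A single UI action inferred (e.g. by a VLM) from tutorial video frames."""
    kind: str        # e.g. "click", "type", "scroll"
    target: str      # textual description of the UI element
    timestamp: float # seconds into the video

@dataclass
class TrajectorySegment:
    """A short subsequence of actions annotated with a textual subgoal."""
    subgoal: str
    actions: list[Action] = field(default_factory=list)

def segment_trajectory(actions, boundaries, subgoals):
    """Split a full action sequence into subgoal-annotated segments.

    `boundaries` are index cut points between segments (here assumed to be
    proposed by a VLM) and `subgoals` are the textual objectives assigned
    to each resulting segment.
    """
    assert len(boundaries) + 1 == len(subgoals)
    cuts = [0, *boundaries, len(actions)]
    return [
        TrajectorySegment(subgoal=goal, actions=actions[start:end])
        for goal, start, end in zip(subgoals, cuts, cuts[1:])
    ]
```

Under this sketch, a raw video becomes a list of `TrajectorySegment`s, each short enough to serve as focused in-context guidance for one subgoal.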
📝 Abstract
Computer-use agents can operate computers and automate laborious tasks, but despite recent rapid progress, they still lag behind human users, especially when tasks require domain-specific procedural knowledge about particular applications, platforms, and multi-step workflows. Humans can bridge this gap by watching video tutorials: we search, skim, and selectively imitate short segments that match our current subgoal. In this paper, we study how to enable computer-use agents to effectively learn from online videos at inference time. We propose a framework that retrieves and filters tutorial videos, converts them into structured demonstration trajectories, and dynamically selects trajectories as in-context guidance during execution. In particular, using a VLM, we infer UI actions, segment videos into short subsequences of actions, and assign each subsequence a textual objective. At inference time, a two-stage selection mechanism dynamically chooses a single trajectory to add in context at each step, focusing the agent on the most helpful local guidance for its next decision. Experiments on two widely used benchmarks show that our framework consistently outperforms strong base agents and variants that use only textual tutorials or transcripts. Analyses highlight the importance of trajectory segmentation and selection, action filtering, and visual information, suggesting that abundant online videos can be systematically distilled into actionable guidance that improves computer-use agents at inference time. Our code is available at https://github.com/UCSB-NLP-Chang/video_demo.
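The two-stage selection described in the abstract can be sketched as a coarse shortlist followed by a fine-grained pick. This is a hedged illustration only: `score_task` and `score_step` stand in for the paper's VLM-based relevance scoring, whose actual prompts and signals are not specified here.

```python
def select_trajectory(segments, task, state, score_task, score_step, k=5):
    """Two-stage dynamic trajectory selection (illustrative sketch).

    Stage 1 shortlists the k candidate segments most relevant to the
    overall task; stage 2 picks the single segment most helpful for the
    agent's next decision given its current execution state. Both scorers
    are hypothetical callables standing in for VLM judgments.
    """
    # Stage 1: coarse shortlist by overall task relevance.
    shortlist = sorted(segments, key=lambda s: score_task(s, task), reverse=True)[:k]
    # Stage 2: fine-grained choice conditioned on the current state.
    return max(shortlist, key=lambda s: score_step(s, state))
```

Because only the single chosen segment is added to the context at each step, the agent sees focused local guidance rather than an entire tutorial, which is the behavior the abstract attributes to the selection mechanism.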