GUIDE: Resolving Domain Bias in GUI Agents through Real-Time Web Video Retrieval and Plug-and-Play Annotation

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited performance of GUI agents stemming from their lack of domain-specific software operation experience, which hinders accurate understanding of interface layouts and task workflows. To overcome this, the authors propose a training-free, plug-and-play framework that leverages a subtitle-driven Video-RAG mechanism with three-stage retrieval to dynamically extract operational knowledge from online tutorial videos. By integrating subtitle analysis, UI element detection, and vision-language model reasoning, the framework constructs an end-to-end knowledge-injection pipeline that automatically annotates actions using an inverse dynamics model, thereby mitigating domain bias. Evaluated on the OSWorld benchmark, the method serves as a universal plugin that boosts average performance by over 5% for both single-agent and multi-agent systems without altering model architecture or parameters, while also significantly reducing the number of execution steps.
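The inverse-dynamics annotation step described above (consecutive keyframes plus detected UI elements fed to a VLM, which infers the intervening action) can be sketched as a prompt-construction helper. The function name, element representation, and prompt wording below are illustrative assumptions, not GUIDE's actual implementation:

```python
def build_inverse_dynamics_prompt(elements_before, elements_after):
    """Format a VLM query in the inverse-dynamics style: list the UI elements
    detected on two consecutive keyframes and ask which single action explains
    the transition. Each element is a (label, bbox) pair, where bbox is
    (x1, y1, x2, y2) in screen coordinates."""
    def fmt(elems):
        return "; ".join(f"{label} at {bbox}" for label, bbox in elems)

    return (
        "Frame t UI elements: " + fmt(elements_before) + "\n"
        "Frame t+1 UI elements: " + fmt(elements_after) + "\n"
        "Question: what single GUI action (e.g. click, type, drag) on which "
        "element transforms frame t into frame t+1?"
    )

# Example: a keyframe pair where a menu click appears to open a dialog.
prompt = build_inverse_dynamics_prompt(
    [("File menu", (0, 0, 40, 20))],
    [("Open dialog", (100, 100, 400, 300))],
)
```

In the real pipeline the keyframe images themselves would accompany this text, and the VLM's answer would become the action label injected into the agent's planning and grounding modules.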
📝 Abstract
Large vision-language models have endowed GUI agents with strong general capabilities for interface understanding and interaction. However, due to insufficient exposure to domain-specific software operation data during training, these agents exhibit significant domain bias: they lack familiarity with the specific operation workflows (planning) and UI element layouts (grounding) of particular applications, limiting their real-world task performance. In this paper, we present GUIDE (GUI Unbiasing via Instructional-Video Driven Expertise), a training-free, plug-and-play framework that resolves GUI agent domain bias by autonomously acquiring domain-specific expertise from web tutorial videos through a retrieval-augmented automated annotation pipeline. GUIDE introduces two key innovations. First, a subtitle-driven Video-RAG pipeline unlocks video semantics through subtitle analysis, performing progressive three-stage retrieval (domain classification, topic extraction, and relevance matching) to identify task-relevant tutorial videos. Second, a fully automated annotation pipeline built on an inverse dynamics paradigm feeds consecutive keyframes, enhanced with UI element detection, into VLMs, inferring the planning and grounding knowledge that is then injected into the agent's corresponding modules to address both manifestations of domain bias. Extensive experiments on OSWorld demonstrate GUIDE's generality as a plug-and-play component for both multi-agent systems and single-model agents. It consistently yields over 5% improvements and reduces execution steps without modifying any model parameters or architecture, validating GUIDE as an architecture-agnostic enhancement that bridges GUI agent domain bias.
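The progressive three-stage retrieval over video subtitles might look roughly like the minimal sketch below. The simple keyword and token-overlap scoring here stands in for whatever classifiers and similarity models the paper actually uses; function and field names are assumptions:

```python
def retrieve_tutorials(videos, domain, topic_keywords, task_query, top_k=1):
    """Progressively narrow a pool of tutorial videos using their subtitles.
    Each video is a dict with at least "title" and "subtitles" keys."""
    # Stage 1: domain classification -- keep videos whose subtitles mention
    # the target application (placeholder for a learned domain classifier).
    stage1 = [v for v in videos if domain.lower() in v["subtitles"].lower()]

    # Stage 2: topic extraction -- keep videos that cover the task's topic
    # keywords (placeholder for LLM-based topic extraction).
    stage2 = [
        v for v in stage1
        if any(k.lower() in v["subtitles"].lower() for k in topic_keywords)
    ]

    # Stage 3: relevance matching -- rank by token overlap with the task
    # query (placeholder for embedding similarity).
    query_tokens = set(task_query.lower().split())

    def score(v):
        return len(query_tokens & set(v["subtitles"].lower().split()))

    return sorted(stage2, key=score, reverse=True)[:top_k]
```

Usage: calling `retrieve_tutorials(videos, "GIMP", ["crop"], "crop the image in GIMP")` would first drop non-GIMP videos, then drop GIMP videos off-topic for cropping, and finally rank the survivors by overlap with the task instruction.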
Problem

Research questions and friction points this paper is trying to address.

domain bias
GUI agents
operation workflows
UI element grounding
software-specific expertise
Innovation

Methods, ideas, or system contributions that make the work stand out.

Video-RAG
domain bias mitigation
plug-and-play framework
inverse dynamics annotation
GUI agents
Rui Xie
Shanghai Jiao Tong University; State Key Laboratory for General Artificial Intelligence, BIGAI
Zhi Gao
State Key Laboratory for General Artificial Intelligence, BIGAI; Beijing Institute of Technology
Chenrui Shi
Beijing Institute of Technology
anomaly detection
Zirui Shang
State Key Laboratory for General Artificial Intelligence, BIGAI; Beijing Institute of Technology
Lu Chen
School of Computer Science, Shanghai Jiao Tong University
Large Language Models · Dialogue Systems · AI for Science
Qing Li
State Key Laboratory for General Artificial Intelligence, BIGAI; Beijing Institute of Technology