Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction

📅 2024-12-05
🏛️ arXiv.org
📈 Citations: 14
Influential: 1
🤖 AI Summary
GUI automation faces challenges including heavy reliance on textual elements, tight platform coupling, and limited reasoning capabilities. This paper introduces the first end-to-end vision-only GUI agent—eliminating OCR, DOM parsing, and closed-source models—to directly interpret interface pixels, comprehend natural-language instructions, and plan multi-step interactions. Methodologically, we propose: (1) joint modeling of explicit planning and visual grounding; (2) the first large-scale multimodal GUI trajectory dataset; (3) a pure-vision Transformer architecture with unified image-text-action embeddings, two-stage curriculum training (grounding → reasoning), and a pixel-level action space. Experiments demonstrate state-of-the-art performance on both offline benchmarks and real-world online applications, with zero-shot cross-platform generalization across Windows, macOS, and Android. All code, models, and data are publicly released for full reproducibility.
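The summary mentions a pixel-level action space shared across platforms. The paper's exact action schema is not reproduced here; the following is a minimal sketch, assuming normalized screen coordinates and a pyautogui-style backend, of what such a unified action space might look like. The action names, the 1920x1080 reference resolution, and the `to_command` helper are all illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

# Hypothetical unified action types; the paper's actual schema may differ.
@dataclass
class Click:
    x: float  # normalized horizontal coordinate in [0, 1]
    y: float  # normalized vertical coordinate in [0, 1]

@dataclass
class Type:
    text: str

@dataclass
class Scroll:
    dx: int
    dy: int

def to_command(action, width=1920, height=1080):
    """Render an action as a pyautogui-style call string.

    Normalized coordinates are scaled to an assumed screen size, so the
    same model output works across platforms and resolutions.
    """
    if isinstance(action, Click):
        return f"pyautogui.click(x={int(action.x * width)}, y={int(action.y * height)})"
    if isinstance(action, Type):
        return f"pyautogui.write({action.text!r})"
    if isinstance(action, Scroll):
        return f"pyautogui.scroll({action.dy})"
    raise ValueError(f"unknown action: {action!r}")
```

Keeping coordinates normalized in the model output and scaling them only at execution time is one plausible way to get the cross-platform generalization the summary describes.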

📝 Abstract
Graphical User Interfaces (GUIs) are critical to human-computer interaction, yet automating GUI tasks remains challenging due to the complexity and variability of visual environments. Existing approaches often rely on textual representations of GUIs, which introduce limitations in generalization, efficiency, and scalability. In this paper, we introduce Aguvis, a unified pure vision-based framework for autonomous GUI agents that operates across various platforms. Our approach leverages image-based observations, grounds natural-language instructions to visual elements, and employs a consistent action space to ensure cross-platform generalization. To address the limitations of previous work, we integrate explicit planning and reasoning within the model, enhancing its ability to autonomously navigate and interact with complex digital environments. We construct a large-scale dataset of GUI agent trajectories, incorporating multimodal reasoning and grounding, and employ a two-stage training pipeline that first focuses on general GUI grounding, followed by planning and reasoning. Through comprehensive experiments, we demonstrate that Aguvis surpasses previous state-of-the-art methods in both offline and real-world online scenarios, achieving, to our knowledge, the first fully autonomous pure vision GUI agent capable of performing tasks independently without collaboration with external closed-source models. We open-source all datasets, models, and training recipes to facilitate future research at https://aguvis-project.github.io/.
Problem

Research questions and friction points this paper is trying to address.

Automating GUI tasks without textual dependencies
Standardizing cross-platform GUI interactions via vision
Enhancing reasoning for autonomous GUI agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-based framework for GUI interaction
Standardizes cross-platform interactions via images
Two-stage training pipeline for grounding and reasoning
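The two-stage pipeline (grounding first, then planning and reasoning) is a sequential curriculum over different data mixtures. The sketch below is a hypothetical outline of that ordering; the stage names, dataset names, and `train_step` callback are illustrative placeholders, not the paper's training recipe.

```python
# Hypothetical two-stage curriculum, assuming stage 1 teaches instruction-to-pixel
# grounding and stage 2 teaches multi-step planning on agent trajectories.
STAGES = [
    {"name": "grounding", "data": ["gui_grounding"]},
    {"name": "planning", "data": ["agent_trajectories"]},
]

def run_curriculum(train_step, stages=STAGES):
    """Run each stage to completion before moving to the next.

    `train_step(stage_name, dataset)` is a placeholder for one fine-tuning
    pass over a dataset; returns the ordered log of (stage, dataset) pairs.
    """
    log = []
    for stage in stages:
        for dataset in stage["data"]:
            train_step(stage["name"], dataset)
            log.append((stage["name"], dataset))
    return log
```

The design choice the ordering encodes: grounding is learned as a standalone skill before the model is asked to compose it with planning, so the second stage can assume reliable pixel-level localization.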