🤖 AI Summary
GUI automation remains challenging due to visual complexity, dynamic environments, and the need for multi-step reasoning. Existing vision-language model (VLM) approaches suffer from low-resolution input, domain shift, and weak sequential decision-making. This paper proposes Mano, a GUI agent built on a multimodal foundation model, featuring a three-stage training pipeline (supervised fine-tuning, offline reinforcement learning, and online reinforcement learning) and a verification-driven error-recovery mechanism. We further construct a high-fidelity simulation environment to generate high-quality interactive data. Our approach integrates multimodal pretraining, offline and online reinforcement learning, cross-domain transfer, and interpretable action modeling. On the Mind2Web and OSWorld benchmarks, our method achieves task success rates of 82.4% and 76.9%, respectively, significantly surpassing state-of-the-art methods, and constitutes a unified framework for robust, recoverable, multi-step GUI automation.
📝 Abstract
Graphical user interfaces (GUIs) are the primary medium for human-computer interaction, yet automating GUI interactions remains challenging due to the complexity of visual elements, dynamic environments, and the need for multi-step reasoning. Existing methods based on vision-language models (VLMs) often suffer from limited resolution, domain mismatch, and insufficient sequential decision-making capability. To address these issues, we propose Mano, a robust GUI agent built upon a multimodal foundation model pre-trained on extensive web and computer system data. Our approach integrates a novel simulated environment for high-fidelity data generation, a three-stage training pipeline (supervised fine-tuning, offline reinforcement learning, and online reinforcement learning), and a verification module for error recovery. Mano demonstrates state-of-the-art performance on multiple GUI benchmarks, including Mind2Web and OSWorld, achieving significant improvements in success rate and operational accuracy. Our work provides new insights into the effective integration of reinforcement learning with VLMs for practical GUI agent deployment, highlighting the importance of domain-specific data, iterative training, and holistic reward design.
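The abstract mentions a verification module that gates actions and enables error recovery, but does not specify its interface. The following is a minimal illustrative sketch of how such a verify-then-act loop could look; every name here (`run_episode`, `policy`, `verifier`, the toy environment) is hypothetical and not taken from the paper.

```python
# Hypothetical sketch of an agent loop with a verification module.
# A proposed action is checked before execution; on verification
# failure the policy is asked for an alternative, up to a retry budget.

def run_episode(policy, verifier, env, max_steps=10, max_retries=2):
    """Run one episode; return True if the task completes."""
    state = env.reset()
    for _ in range(max_steps):
        for attempt in range(max_retries + 1):
            action = policy(state, attempt)  # may propose an alternative on retry
            if verifier(state, action):      # verifier gates execution
                break
        else:
            return False                     # verification kept failing: give up
        state, done = env.step(action)
        if done:
            return True
    return False


class ToyEnv:
    """Stand-in environment: reach position 3 by incrementing."""
    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        self.pos += action
        return self.pos, self.pos >= 3
```

In this sketch the verifier is a cheap predicate; in a real GUI agent it would be a learned model judging whether the proposed click or keystroke is plausible for the current screen, and the retry path is what makes errors recoverable rather than terminal.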