🤖 AI Summary
This work addresses the challenges of generality and efficiency in cross-platform (desktop, mobile, browser, etc.) GUI agents for automation, tool invocation, and knowledge comprehension. We propose GUI-Owl-1.5, a family of native multi-resolution GUI agent models spanning 2B to 235B parameters, designed for cloud-edge collaboration and real-time interaction. Our approach integrates a hybrid data-flywheel mechanism, a unified chain-of-thought synthesis framework, and the multi-platform MRPO reinforcement learning algorithm, complemented by a data generation strategy that combines simulated environments with cloud-based sandboxes to improve data quality, reasoning capability, and training efficiency. The model achieves state-of-the-art performance among open-source models on more than 20 GUI benchmarks, including OSWorld (56.5), AndroidWorld (71.6), WebArena (48.4), ScreenSpotPro (80.3), and GUI-Knowledge Bench (75.5).
📝 Abstract
The paper introduces GUI-Owl-1.5, the latest native GUI agent model, which offers instruct/thinking variants in multiple sizes (2B/4B/8B/32B/235B) and supports a range of platforms (desktop, mobile, browser, and more) to enable cloud-edge collaboration and real-time interaction. GUI-Owl-1.5 achieves state-of-the-art results among open-source models on more than 20 GUI benchmarks: (1) on GUI automation tasks, it obtains 56.5 on OSWorld, 71.6 on AndroidWorld, and 48.4 on WebArena; (2) on grounding tasks, it obtains 80.3 on ScreenSpotPro; (3) on tool-calling tasks, it obtains 47.6 on OSWorld-MCP and 46.8 on MobileWorld; (4) on memory and knowledge tasks, it obtains 75.5 on GUI-Knowledge Bench. GUI-Owl-1.5 incorporates several key innovations: (1) Hybrid Data Flywheel: we build data pipelines for UI understanding and trajectory generation on a combination of simulated environments and cloud-based sandbox environments, improving the efficiency and quality of data collection; (2) Unified Enhancement of Agent Capabilities: we use a unified thought-synthesis pipeline to strengthen the model's reasoning, with particular emphasis on key agent abilities including tool/MCP use, memory, and multi-agent adaptation; (3) Multi-platform Environment RL Scaling: we propose a new environment RL algorithm, MRPO, to address multi-platform conflicts and the low training efficiency of long-horizon tasks. The GUI-Owl-1.5 models are open-sourced, and an online cloud-sandbox demo is available at https://github.com/X-PLUG/MobileAgent.