🤖 AI Summary
This work addresses the limitations of current computer-use agents, which rely on static imitation learning and struggle to model the complex causal dynamics inherent in long-horizon tasks. To overcome this, the authors propose a self-evolving agent framework that establishes a closed-loop cycle of synthetic data generation, policy optimization, and error analysis. The framework leverages a verifiable synthetic task engine and an iterative learning mechanism bounded by the agent's evolving capabilities. Built on an asynchronous sandbox infrastructure that enables large-scale rollouts, the system supports efficient policy iteration and self-correction. Evaluated on the OSWorld benchmark, the method achieves a success rate of 56.7%, outperforming both the previous best open-source model, OpenCUA-72B (45.0%), and the closed-source model UI-TARS-2 (53.1%), thereby surpassing the performance ceiling of static imitation learning.
📝 Abstract
The development of native computer-use agents (CUAs) represents a significant leap in multimodal AI. However, their potential is currently bottlenecked by the constraints of static data scaling. Existing paradigms, which rely primarily on passive imitation of static datasets, struggle to capture the intricate causal dynamics inherent in long-horizon computer tasks. In this work, we introduce EvoCUA, a native computer-use agentic model. Unlike static imitation, EvoCUA integrates data generation and policy optimization into a self-sustaining evolutionary cycle. To mitigate data scarcity, we develop a verifiable synthesis engine that autonomously generates diverse tasks coupled with executable validators. To enable large-scale experience acquisition, we design a scalable infrastructure orchestrating tens of thousands of asynchronous sandbox rollouts. Building on these massive trajectories, we propose an iterative evolving learning strategy to efficiently internalize this experience. This mechanism dynamically regulates policy updates by identifying capability boundaries: it reinforces successful routines while transforming failure trajectories into rich supervision through error analysis and self-correction. Empirical evaluations on the OSWorld benchmark demonstrate that EvoCUA achieves a success rate of 56.7%, establishing a new open-source state of the art. Notably, EvoCUA significantly outperforms the previous best open-source model, OpenCUA-72B (45.0%), and surpasses leading closed-weight models such as UI-TARS-2 (53.1%). Crucially, our results underscore the generalizability of this approach: the evolving paradigm driven by learning from experience yields consistent performance gains across foundation models of varying scales, establishing a robust and scalable path for advancing native agent capabilities.