🤖 AI Summary
To address the scarcity of high-quality datasets for instruction–UI element alignment in desktop environments, this work introduces GroundCUA—the first large-scale desktop interaction grounding dataset—comprising 56K screenshots from 87 real-world applications across 12 categories and over 3.56 million high-precision, human-verified annotations. Building on this dataset, the authors develop GroundNext, a family of models at 3B and 7B scales that map natural language instructions to their target UI elements, trained with supervised fine-tuning followed by reinforcement learning post-training. The approach drastically reduces data dependency: using less than one-tenth of the training data of prior work, GroundNext surpasses previous state-of-the-art methods and achieves consistent top performance across five benchmarks. In an agentic evaluation on OSWorld with o3 as the planner, it matches or exceeds models trained on substantially more data, empirically validating the efficacy of high-quality, expert-demonstration-driven learning for desktop grounding.
📝 Abstract
Building reliable computer-use agents requires grounding: accurately connecting natural language instructions to the correct on-screen elements. While large datasets exist for web and mobile interactions, high-quality resources for desktop environments are limited. To address this gap, we introduce GroundCUA, a large-scale desktop grounding dataset built from expert human demonstrations. It covers 87 applications across 12 categories and includes 56K screenshots, with every on-screen element carefully annotated for a total of over 3.56M human-verified annotations. From these demonstrations, we generate diverse instructions that capture a wide range of real-world tasks, providing high-quality data for model training. Using GroundCUA, we develop the GroundNext family of models that map instructions to their target UI elements. At both 3B and 7B scales, GroundNext achieves state-of-the-art results across five benchmarks using supervised fine-tuning, while requiring less than one-tenth the training data of prior work. Reinforcement learning post-training further improves performance, and when evaluated in an agentic setting on the OSWorld benchmark using o3 as planner, GroundNext attains comparable or superior results to models trained with substantially more data. These results demonstrate the critical role of high-quality, expert-driven datasets in advancing general-purpose computer-use agents.