🤖 AI Summary
GUI agents face two key challenges in cross-platform autonomous task execution: (1) task planning ambiguity—arising from multiple feasible action sequences—and (2) insufficient visual localization and interaction precision on high-resolution interfaces. This paper proposes a test-time expansion framework that jointly addresses these issues via parallel candidate action sampling, discriminative action selection, reinforcement learning–driven visual grounding optimization, and large language model–assisted action proposal. Its core innovation lies in dynamically expanding the planning space and introducing an explicit decision-making mechanism at test time—enhancing both decision robustness and pixel-level operational accuracy without compromising efficiency. Evaluated on the Screenspot-Pro, Screenspot-V2, and OSWorld-G benchmarks, the method achieves state-of-the-art performance: GTA1-7B reaches 50.1%, 92.4%, and 67.7% accuracy on the three benchmarks, respectively, and a 45.2% task success rate on OSWorld.
📝 Abstract
Graphical user interface (GUI) agents autonomously operate across platforms (e.g., Linux) to complete tasks by interacting with visual elements. Specifically, a user instruction is decomposed into a sequence of action proposals, each corresponding to an interaction with the GUI. After each action, the agent observes the updated GUI environment to plan the next step. However, two main challenges arise: i) resolving ambiguity in task planning (i.e., the action proposal sequence), where selecting an appropriate plan is non-trivial, as many valid ones may exist; ii) accurately grounding actions in complex and high-resolution interfaces, i.e., precisely interacting with visual targets.
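The observe–act loop described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation; `GUIEnv`, `plan_next_action`, and the action dictionary format are hypothetical stand-ins.

```python
class GUIEnv:
    """Toy stand-in for a real GUI environment (hypothetical interface)."""
    def __init__(self, steps_to_finish=2):
        self._remaining = steps_to_finish

    def screenshot(self):
        # A real environment would return screen pixels; here, a toy state.
        return {"remaining": self._remaining}

    def execute(self, action):
        # A real environment would perform the click/type/scroll.
        self._remaining -= 1


def plan_next_action(instruction, observation):
    """Hypothetical planner: proposes one action per step, 'done' at the end."""
    if observation["remaining"] <= 0:
        return {"type": "done"}
    return {"type": "click", "x": 100, "y": 200}


def run_agent(instruction, env, max_steps=30):
    """Decompose the instruction into actions step by step: observe the GUI,
    propose one action, execute it, then observe the updated GUI."""
    observation = env.screenshot()
    for _ in range(max_steps):
        action = plan_next_action(instruction, observation)
        if action["type"] == "done":
            return True
        env.execute(action)
        observation = env.screenshot()  # re-observe to plan the next step
    return False
```

The two challenges the paper targets live in the two calls inside the loop: `plan_next_action` (which of many valid proposals to pick) and `env.execute` (grounding the chosen action to the right pixels).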
This paper investigates the two aforementioned challenges with our GUI Test-time Scaling Agent, namely GTA1. First, to select the most appropriate action proposal, we introduce a test-time scaling method. At each step, we sample multiple candidate action proposals and leverage a judge model to evaluate and select the most suitable one. This trades additional computation for better decision quality; because candidates are sampled concurrently, it also shortens task execution and improves overall performance. Second, we propose a model that achieves improved accuracy when grounding the selected action proposal to its corresponding visual elements. Our key insight is that reinforcement learning (RL) facilitates visual grounding through inherent objective alignment, rewarding successful clicks on interface elements.
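Both ideas can be sketched compactly. The snippet below is an illustrative sketch under stated assumptions, not GTA1's actual code: `propose` and `judge_score` are hypothetical placeholders for the planner and the judge model, and the reward shown is the simple click-in-box signal the abstract describes.

```python
import random

def propose(state):
    """Hypothetical planner call: samples one candidate action proposal.
    (In practice these calls would run concurrently against an LLM.)"""
    return {"action": "click", "target": f"element-{random.randrange(5)}"}

def judge_score(state, proposal):
    """Hypothetical judge model: scores how suitable a candidate is.
    A random score stands in for the real model's judgment here."""
    return random.random()

def select_action(state, n_candidates=8):
    """Test-time scaling: sample several candidate proposals, then let the
    judge evaluate each and pick the highest-scoring one."""
    candidates = [propose(state) for _ in range(n_candidates)]
    scores = [judge_score(state, c) for c in candidates]
    best = max(range(n_candidates), key=scores.__getitem__)
    return candidates[best]

def grounding_reward(click_xy, target_box):
    """RL reward for grounding: 1.0 if the predicted click lands inside the
    target element's bounding box (x1, y1, x2, y2), else 0.0."""
    x, y = click_xy
    x1, y1, x2, y2 = target_box
    return 1.0 if (x1 <= x <= x2 and y1 <= y <= y2) else 0.0
```

The reward directly mirrors the deployment objective, a successful click on the intended element, which is the "inherent objective alignment" the abstract credits for RL's effectiveness here.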
Experimentally, our method establishes state-of-the-art performance across diverse benchmarks. For example, GTA1-7B achieves 50.1%, 92.4%, and 67.7% accuracies on Screenspot-Pro, Screenspot-V2, and OSWorld-G, respectively. When paired with a planner applying our test-time scaling strategy, it exhibits state-of-the-art agentic performance (e.g., 45.2% task success rate on OSWorld). We open-source our code and models here.