🤖 AI Summary
To address the scarcity of high-quality trajectory data and the poor scalability of behavior cloning in training GUI-operating AI agents, this paper proposes a lightweight, efficient training paradigm for desktop environments. Methodologically: (1) we construct a large-scale instruction-driven framework for collecting suboptimal trajectories; (2) we introduce a novel step-level automated verification pipeline that leverages GPT-4o's multimodal reasoning to perform a binary correctness assessment of each action, conditioned on the pre- and post-action screen states; (3) we integrate prospect theory to model action-value biases, enabling joint optimization over positive and negative samples; (4) we design a 7B-parameter vision-language model (VLM) fine-tuning strategy tailored for GUI agents, incorporating frame-difference analysis to improve state-perception efficiency. Evaluated on the realistic WinAgentArena desktop benchmark, our approach achieves state-of-the-art performance at significantly reduced training cost, marking the first successful deployment of a 7B-scale VLM for efficient end-to-end GUI control in complex tasks.
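The step-level verification described above can be sketched as building a GPT-4o vision request from the pre- and post-action screenshots and mapping the reply to a binary label. This is a minimal illustration, not the paper's actual prompt or client code; the prompt wording and the `CORRECT`/`INCORRECT` reply protocol are assumptions made for the example.

```python
import base64


def build_verification_request(instruction, action, png_before, png_after):
    """Build a GPT-4o chat request that asks for a binary step verdict.

    png_before / png_after are raw PNG bytes of the screen before and
    after the action. Prompt text is illustrative, not the paper's.
    """
    def img(png_bytes):
        # Encode a screenshot as a data-URL image part for the vision API.
        return {
            "type": "image_url",
            "image_url": {
                "url": "data:image/png;base64,"
                + base64.b64encode(png_bytes).decode()
            },
        }

    return {
        "model": "gpt-4o",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text":
                 f"Task: {instruction}\nAction taken: {action}\n"
                 "Given the screenshots before and after the action, "
                 "answer exactly CORRECT or INCORRECT."},
                img(png_before),
                img(png_after),
            ],
        }],
    }


def parse_verdict(reply_text):
    """Map the model's reply to a binary step label (True = correct)."""
    return reply_text.strip().upper().startswith("CORRECT")
```

The request dict can be passed directly to an OpenAI chat-completions client; only the parser runs locally, so the verification step stays cheap to retry when the reply is malformed.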
📝 Abstract
Developing AI agents that autonomously manipulate graphical user interfaces is a long-standing, challenging task. Recent advances in data scaling laws inspire us to train computer-use agents with a scaled instruction set, yet training agents with behavior cloning still requires an immense amount of high-quality trajectories. To meet this scalability need, we design STEVE, a step verification pipeline for computer-use agent training. First, we establish a large instruction set for computer-use agents and collect trajectory data with suboptimal agents. Then, GPT-4o verifies the correctness of each step in a trajectory based on the screens before and after the action execution, assigning each step a binary label. Finally, we adopt Kahneman-Tversky Optimization (KTO) to optimize the agent with the binary stepwise labels. Extensive experiments demonstrate that our agent outperforms supervised fine-tuning by leveraging both positive and negative actions within a trajectory. STEVE also enables us to train a 7B vision-language model as a computer-use agent, achieving leading performance in the challenging live desktop environment WinAgentArena with great efficiency at reduced cost. Code and data: https://github.com/FanbinLu/STEVE.
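The optimization step above, which uses both positive and negative step labels, can be sketched as a KTO-style loss: each action's implied reward is the scaled log-probability ratio between the policy and a frozen reference model, and verified-correct steps are pulled above a reference point while incorrect steps are pushed below it. This is a minimal pure-Python sketch under simplifying assumptions (the reference point is taken as the batch-mean reward rather than the KL estimate of the full KTO formulation, and per-class weights are omitted); it is not the paper's exact objective.

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def kto_step_loss(logp_policy, logp_ref, labels, beta=0.1):
    """KTO-style loss over step-level binary labels.

    logp_policy / logp_ref: per-step summed log-probs of each action
    under the policy and a frozen reference model.
    labels: per-step bool, True = step verified correct by GPT-4o.
    """
    # Implied reward of each step: scaled policy/reference log-ratio.
    rewards = [beta * (p - r) for p, r in zip(logp_policy, logp_ref)]
    # Simplified reference point (batch mean stands in for the KL term).
    z_ref = sum(rewards) / len(rewards)
    losses = []
    for reward, good in zip(rewards, labels):
        if good:
            # Desirable step: loss shrinks as its reward exceeds z_ref.
            losses.append(1.0 - sigmoid(reward - z_ref))
        else:
            # Undesirable step: loss shrinks as its reward falls below z_ref.
            losses.append(1.0 - sigmoid(z_ref - reward))
    return sum(losses) / len(losses)
```

Unlike supervised fine-tuning, which would simply discard the incorrect steps, this loss extracts signal from them by penalizing the policy for assigning them high likelihood.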