🤖 AI Summary
Current GUI agents lack effective mechanisms for evaluating action quality, often leading to task failure due to irreversible errors. This work proposes IntentScore, a novel action-scoring model that integrates planning intent into the action encoder to distinguish between semantically similar but goal-divergent operations. Trained on 398K cross-operating-system offline interaction trajectories using contrastive learning and a margin ranking loss, IntentScore learns generalizable reward signals from heterogeneous behavioral data. In held-out evaluations, it achieves a pairwise discrimination accuracy of 97.5%. When deployed as a re-ranker in the unseen environment OSWorld, it improves task success rate by 6.9 percentage points.
📄 Abstract
Computer-Use Agents (CUAs) leverage large language models to execute GUI operations in desktop environments, yet they generate actions without assessing their quality, leading to irreversible errors that cascade through subsequent steps. We propose IntentScore, a plan-aware reward model that learns to score candidate actions from 398K offline GUI interaction steps spanning three operating systems. IntentScore trains with two complementary objectives: contrastive alignment for state-action relevance and margin ranking for action correctness. Architecturally, it embeds each candidate's planning intent in the action encoder, enabling discrimination between candidates with similar actions but different rationales. IntentScore achieves 97.5% pairwise discrimination accuracy on held-out evaluation. Deployed as a re-ranker for Agent S3 on OSWorld, an environment entirely unseen during training, IntentScore improves task success rate by 6.9 points, demonstrating that reward estimation learned from heterogeneous offline trajectories generalizes to unseen agents and task distributions.
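To make the two training objectives concrete, the sketch below shows one plausible PyTorch formulation: an in-batch contrastive (InfoNCE-style) loss aligning state embeddings with their intent-conditioned action embeddings, plus a margin ranking loss pushing correct candidates above incorrect ones. All function names, tensor shapes, and hyperparameters here are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def intent_score_losses(state_emb, action_emb, scores_pos, scores_neg,
                        temperature=0.1, margin=0.5):
    """Hypothetical sketch of the two objectives described in the abstract.

    state_emb:  (B, D) state encodings
    action_emb: (B, D) encodings of the matching (state, action) pairs,
                where the action encoder also consumes the planning intent
    scores_pos: (B,) scalar scores for correct candidate actions
    scores_neg: (B,) scalar scores for incorrect candidates
    """
    # Contrastive alignment: each state is pulled toward its own action
    # embedding and pushed away from the other actions in the batch.
    logits = state_emb @ action_emb.T / temperature        # (B, B)
    targets = torch.arange(state_emb.size(0))
    contrastive = F.cross_entropy(logits, targets)

    # Margin ranking: correct candidates should outscore incorrect
    # ones by at least `margin`.
    ranking = F.margin_ranking_loss(
        scores_pos, scores_neg,
        target=torch.ones_like(scores_pos), margin=margin)

    return contrastive + ranking

def rerank(candidate_scores):
    """At inference, the re-ranker simply keeps the top-scoring candidate."""
    return int(torch.argmax(candidate_scores))
```

At deployment, the agent proposes several candidate actions per step, IntentScore assigns each a scalar score, and `rerank` selects the highest-scoring one for execution.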