🤖 AI Summary
Current GUI agents suffer from inaccurate spatial localization and low task completion rates when trained with supervised fine-tuning (SFT), primarily due to weak positional awareness and sparse reward signals. To address these limitations, this paper introduces a position-aware preference optimization framework for GUI spatial localization. The method comprises three key components: (1) an information-entropy-driven region-focusing mechanism that enhances local positional sensitivity; (2) a dynamic location reward function, based on physical (Euclidean) distance, that enables fine-grained localization evaluation; and (3) Group Relative Preference Optimization (GRPO), which integrates SFT with preference learning to improve exploration efficiency. Evaluated on both offline benchmarks and real-world online GUI environments, the approach achieves state-of-the-art performance, with significant improvements in click localization accuracy and overall task completion rate.
📝 Abstract
The advent of autonomous agents is transforming interactions with Graphical User Interfaces (GUIs) by employing natural language as a powerful intermediary. Although Supervised Fine-Tuning (SFT) methods predominate in current GUI agents for spatial localization, they face substantial challenges due to their limited capacity to accurately perceive positional data. Existing alternatives, such as reinforcement learning, often fail to assess positional accuracy effectively, restricting their utility. In response, we introduce Location Preference Optimization (LPO), a novel approach that leverages locational data to optimize interaction preferences. LPO uses information entropy to predict interaction positions by focusing on zones rich in information. In addition, it introduces a dynamic location reward function based on physical distance, reflecting the varying importance of interaction positions. Supported by Group Relative Preference Optimization (GRPO), LPO facilitates extensive exploration of GUI environments and significantly enhances interaction precision. Comprehensive experiments demonstrate LPO's superior performance, achieving SOTA results across both offline benchmarks and real-world online evaluations. Our code will be made publicly available at https://github.com/AIDC-AI/LPO.
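To make the distance-based reward idea concrete, here is a minimal sketch (not the paper's exact formulation; the Gaussian decay shape, the `sigma` bandwidth, and the normalized-coordinate convention are all assumptions): the reward decays smoothly with the Euclidean distance between the predicted and ground-truth click positions, so near-misses still receive partial credit instead of the sparse 0/1 signal that plain SFT-style matching provides.

```python
import math

def location_reward(pred_xy, target_xy, sigma=0.1):
    """Hypothetical distance-based location reward.

    pred_xy, target_xy: (x, y) click positions in normalized [0, 1]
    screen coordinates. The reward is 1.0 for a perfect hit and decays
    toward 0 as the Euclidean distance grows, with `sigma` controlling
    how quickly partial credit falls off.
    """
    dist = math.dist(pred_xy, target_xy)
    return math.exp(-(dist ** 2) / (2 * sigma ** 2))

# A perfect click earns the maximum reward; a slightly-off click
# still earns more than a badly-off one, giving a dense training signal.
exact = location_reward((0.50, 0.50), (0.50, 0.50))
near = location_reward((0.52, 0.50), (0.50, 0.50))
far = location_reward((0.70, 0.50), (0.50, 0.50))
print(exact, near, far)
```

A dense reward of this shape is what lets preference-based optimizers such as GRPO rank sampled click candidates against each other, rather than relying on exact-match supervision.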