LPO: Towards Accurate GUI Agent Interaction via Location Preference Optimization

📅 2025-06-11
🤖 AI Summary
Current GUI agents suffer from inaccurate spatial localization and low task completion rates in supervised fine-tuning (SFT)-based localization tasks, primarily due to weak positional awareness and sparse reward signals. To address these limitations, this paper introduces the first position-aware preference optimization framework for GUI spatial localization. The method comprises three key components: (1) an information-entropy-driven region focusing mechanism that enhances local positional sensitivity; (2) a dynamic positional reward function that enables fine-grained localization evaluation via Euclidean distance; and (3) Group Relative Preference Optimization (GRPO), which integrates SFT with preference learning to improve exploration efficiency. Evaluated on both offline benchmarks and real-world online GUI environments, the approach achieves state-of-the-art performance, with significant improvements in click localization accuracy and overall task completion rate.
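The distance-based reward described above can be illustrated with a minimal sketch: a click exactly on the target earns the full reward, and the reward decays smoothly with Euclidean distance. The exponential decay shape and the `sigma` constant are illustrative assumptions, not values taken from the paper.

```python
import math

def location_reward(pred_xy, target_xy, sigma=50.0):
    """Distance-based positional reward: 1.0 at the target,
    decaying with Euclidean distance. `sigma` (in pixels) sets
    how fast the reward falls off and is an assumed constant."""
    dx = pred_xy[0] - target_xy[0]
    dy = pred_xy[1] - target_xy[1]
    dist = math.hypot(dx, dy)  # Euclidean distance to the target
    return math.exp(-dist / sigma)

# A click on the target earns the full reward;
# one 100 px away earns exp(-2) ≈ 0.135.
print(location_reward((200, 300), (200, 300)))            # 1.0
print(round(location_reward((300, 300), (200, 300)), 3))  # 0.135
```

Unlike a binary hit/miss signal, a reward of this shape gives the policy a gradient toward the target even when every sampled click misses, which is what makes the reward signal dense rather than sparse.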

📝 Abstract
The advent of autonomous agents is transforming interactions with Graphical User Interfaces (GUIs) by employing natural language as a powerful intermediary. Despite the predominance of Supervised Fine-Tuning (SFT) methods in current GUI agents for achieving spatial localization, these methods face substantial challenges due to their limited capacity to accurately perceive positional data. Existing strategies, such as reinforcement learning, often fail to assess positional accuracy effectively, thereby restricting their utility. In response, we introduce Location Preference Optimization (LPO), a novel approach that leverages locational data to optimize interaction preferences. LPO uses information entropy to predict interaction positions by focusing on zones rich in information. In addition, it introduces a dynamic location reward function based on physical distance, reflecting the varying importance of interaction positions. Supported by Group Relative Preference Optimization (GRPO), LPO facilitates extensive exploration of GUI environments and significantly enhances interaction precision. Comprehensive experiments demonstrate LPO's superior performance, achieving SOTA results across both offline benchmarks and real-world online evaluations. Our code will be made publicly available at https://github.com/AIDC-AI/LPO.
Problem

Research questions and friction points this paper is trying to address.

Improving GUI agent spatial localization accuracy
Optimizing interaction preferences using locational data
Enhancing GUI interaction precision via dynamic rewards
Innovation

Methods, ideas, or system contributions that make the work stand out.

LPO optimizes interaction using locational data
Uses entropy to predict high-info zones
Dynamic reward function enhances precision
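The entropy-driven focusing idea in the bullets above can be sketched as follows: score each patch of a grayscale screenshot by the Shannon entropy of its pixel intensities and attend to the richest one, since flat backgrounds score low while text, icons, and widgets score high. The patch size, the flat-list image layout, and the plain intensity histogram are illustrative assumptions.

```python
import math

def patch_entropy(patch):
    """Shannon entropy (bits) of the pixel intensities in one patch."""
    counts = {}
    for v in patch:
        counts[v] = counts.get(v, 0) + 1
    n = len(patch)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def richest_patch(image, h, w, size=4):
    """Return the (row, col) of the size x size patch with the highest
    entropy. `image` is a flat list of h*w grayscale intensities."""
    best, best_pos = -1.0, (0, 0)
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            patch = [image[(r + i) * w + (c + j)]
                     for i in range(size) for j in range(size)]
            e = patch_entropy(patch)
            if e > best:
                best, best_pos = e, (r, c)
    return best_pos

# A uniform background patch scores zero entropy; a patch with varied
# pixels (e.g. a button with a text label) scores higher and wins.
```

In this toy form, the highest-entropy patch stands in for the information-rich zone the agent should focus on when predicting an interaction position.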
Jiaqi Tang
The Hong Kong University of Science and Technology
Yu Xia
Alibaba International Digital Commerce
Yi-Feng Wu
Alibaba International Digital Commerce
Yuwei Hu
Alibaba International Digital Commerce
Yuhui Chen
Alibaba International Digital Commerce
Qing-Guo Chen
Alibaba Inc.
Machine Learning
Xiaogang Xu
CUHK
Large Model · Multi-Modality AI · AIGC · Generative Photography · AI Security
Xiangyu Wu
Nanjing University of Science and Technology
Hao Lu
The Hong Kong University of Science and Technology
Yanqing Ma
Alibaba International Digital Commerce
Shiyin Lu
Alibaba Group
Multimodal Large Language Models · Online Learning · Bandits
Qifeng Chen
HKUST
Computational Photography · Image Synthesis · Generative AI · Autonomous Driving · Embodied AI