AI Summary
Traditional GUI element localization methods overlook spatial interaction uncertainty and the visual-semantic hierarchy, leading to severe background interference and ambiguous center-edge discrimination, thereby degrading click accuracy. To address this, we propose a robust visual grounding framework for GUI elements. First, we design a background-suppression attention mechanism to mitigate distractions from non-target regions. Second, we construct size-adaptive 2D Gaussian heatmaps grounded in Fitts' Law, enabling fine-grained supervision whose weights increase toward the element center. Third, we integrate visual-semantic hierarchical learning to enhance representation capability. Our method achieves 92.3% and 50.5% accuracy on ScreenSpot-v2 and ScreenSpot-Pro, respectively. Ablation studies confirm the effectiveness of each component, and comprehensive comparisons demonstrate substantial improvements over state-of-the-art baselines, along with strong generalization across diverse GUI layouts and domains.
Abstract
Precise localization of GUI elements is crucial for the development of GUI agents. Traditional methods rely on bounding-box or center-point regression, neglecting spatial interaction uncertainty and visual-semantic hierarchies. Recent methods incorporate attention mechanisms but still face two key issues: (1) leaving background regions unsuppressed causes attention to drift away from the desired area, and (2) uniform labeling fails to distinguish the center of the target UI element from its edges, leading to click imprecision. Inspired by how humans visually process and interact with GUI elements, we propose the Valley-to-Peak (V2P) method to address these issues. To mitigate background distractions, V2P introduces a suppression attention mechanism that minimizes the model's focus on irrelevant regions so as to highlight the intended one. For center-edge distinction, V2P takes a Fitts' Law-inspired approach, modeling GUI interactions as 2D Gaussian heatmaps whose weight decreases gradually from the center toward the edges. The weight distribution follows a Gaussian function, with the variance determined by the target's size. Consequently, V2P effectively isolates the target area and teaches the model to concentrate on the most essential point of the UI element. A model trained with V2P achieves 92.3% and 50.5% on the ScreenSpot-v2 and ScreenSpot-Pro benchmarks, respectively. Ablations further confirm each component's contribution, highlighting V2P's generalizability for precise GUI grounding tasks.
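The size-adaptive Gaussian labels described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function name, the `sigma_scale` hyperparameter, and the exact way the box size maps to the variance are assumptions chosen only to show the idea of a heatmap that peaks at the element center and decays toward its edges, with a spread proportional to the element's size.

```python
import numpy as np

def gaussian_heatmap(h, w, box, sigma_scale=0.25):
    """Size-adaptive 2D Gaussian supervision map for one GUI element.

    `box` is (x0, y0, x1, y1) in pixels. `sigma_scale` is a hypothetical
    hyperparameter tying the Gaussian's spread to the element's width and
    height, in the spirit of the Fitts' Law-inspired labels above.
    """
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    # Larger targets tolerate larger click offsets, so sigma grows with size.
    sx = max((x1 - x0) * sigma_scale, 1e-6)
    sy = max((y1 - y0) * sigma_scale, 1e-6)
    ys, xs = np.mgrid[0:h, 0:w]
    # Peak value 1.0 at the element center, decaying toward the edges.
    return np.exp(-(((xs - cx) ** 2) / (2 * sx ** 2)
                    + ((ys - cy) ** 2) / (2 * sy ** 2)))

# Example: a 40x20 px button centered at (50, 50) on a 100x100 screen.
hm = gaussian_heatmap(100, 100, (30, 40, 70, 60))
```

Such a map could serve directly as a soft regression target, so that pixels near the center receive the strongest supervision while edge pixels contribute progressively less.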