🤖 AI Summary
High-resolution UI screenshots introduce substantial redundant visual tokens in vision-language models, leading to excessive computational overhead and diluted attention, which degrades both efficiency and accuracy in UI grounding. To address this, this work proposes FocusUI, a framework that integrates instruction-aware semantics with UI structural priors to score and select critical visual tokens. It further introduces PosPad, a strategy that compresses each run of discarded tokens while preserving positional continuity. FocusUI is presented as the first efficient token-selection method tailored for UI grounding: it outperforms existing GUI-specific models across four benchmarks, including ScreenSpot-Pro, and with only 30% of the original tokens it stays within 3.2% of full-token performance while accelerating inference by up to 1.44x and reducing peak GPU memory consumption by 17%.
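The selection step described above fuses an instruction-conditioned relevance score with a structural UI-graph score and keeps the top-scoring patches. A minimal sketch of that idea, assuming per-patch scores are already computed (the function name `fuse_and_select`, the weight `alpha`, and the linear fusion form are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def fuse_and_select(instr_score, graph_score, alpha=0.5, keep_ratio=0.3):
    """Hypothetical sketch: blend an instruction-conditioned score with a
    rule-based UI-graph score, then keep the top `keep_ratio` of patches.

    instr_score, graph_score: 1-D arrays, one score per visual patch.
    Returns a boolean keep-mask over the patch sequence.
    """
    fused = alpha * instr_score + (1.0 - alpha) * graph_score
    k = max(1, int(keep_ratio * len(fused)))
    keep_idx = np.argsort(fused)[-k:]          # indices of the k highest scores
    mask = np.zeros(len(fused), dtype=bool)
    mask[keep_idx] = True
    return mask
```

With 10 patches and `keep_ratio=0.3`, exactly 3 patches survive; a patch that scores highly under either component can be retained, which matches the stated goal of keeping tokens that are both distinct and instruction-relevant.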
📝 Abstract
Vision-Language Models (VLMs) have shown remarkable performance in User Interface (UI) grounding tasks, driven by their ability to process increasingly high-resolution screenshots. However, screenshots are tokenized into thousands of visual tokens (e.g., about 4700 at 2K resolution), incurring significant computational overhead and diluting attention. In contrast, humans typically focus on regions of interest when interacting with a UI. In this work, we pioneer the task of efficient UI grounding. Guided by practical analysis of the task's characteristics and challenges, we propose FocusUI, an efficient UI grounding framework that selects the patches most relevant to the instruction while preserving positional continuity for precise grounding. FocusUI addresses two key challenges: (1) Eliminating redundant tokens in visual encoding. We construct patch-level supervision by fusing an instruction-conditioned score with a rule-based UI-graph score that down-weights large homogeneous regions, so as to select distinct, instruction-relevant visual tokens. (2) Preserving positional continuity during visual token selection. We find that general visual token pruning methods suffer severe accuracy degradation on UI grounding tasks because they break positional information. We introduce a novel PosPad strategy, which compresses each contiguous sequence of dropped visual tokens into a single special marker placed at the sequence's last index, thereby preserving positional continuity. Comprehensive experiments on four grounding benchmarks demonstrate that FocusUI surpasses GUI-specific baselines. On the ScreenSpot-Pro benchmark, FocusUI-7B improves over GUI-Actor-7B by 3.7%. Even when retaining only 30% of the visual tokens, FocusUI-7B drops by only 3.2% while achieving up to 1.44x faster inference and 17% lower peak GPU memory.
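The PosPad strategy described in the abstract replaces each maximal run of dropped tokens with one special marker carrying the run's last position index. A minimal sketch under those stated assumptions (the function name `pospad` and the marker value are hypothetical; the real method operates on model token embeddings and position IDs, not Python lists):

```python
def pospad(tokens, keep_mask, marker):
    """Hypothetical PosPad sketch: keep selected tokens at their original
    positions; collapse each maximal run of dropped tokens into a single
    `marker` assigned the position index of the run's last element."""
    out, positions = [], []
    i, n = 0, len(tokens)
    while i < n:
        if keep_mask[i]:
            out.append(tokens[i])
            positions.append(i)           # kept token retains its index
            i += 1
        else:
            j = i
            while j < n and not keep_mask[j]:
                j += 1                    # scan to the end of the dropped run
            out.append(marker)
            positions.append(j - 1)       # marker sits at the run's last index
            i = j
    return out, positions
```

For tokens `a b c d e` with keep-mask `1 0 0 1 0`, this yields the sequence `a <pad> d <pad>` with positions `[0, 2, 3, 4]`: the gaps shrink to one marker each, but the surviving position indices remain monotone and consistent with the original grid, which is the continuity property the abstract attributes to PosPad.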