🤖 AI Summary
GUI screenshots exhibit high visual complexity and dense element layouts, severely limiting the grounding capability of vision-language models (VLMs) in web action localization. To address this, we propose RegionFocus, a lightweight, test-time visual scaling method tailored for GUI agents that dynamically focuses on salient interface regions to suppress background interference, and introduce an “image-as-map” mechanism that visualizes operational landmarks at each step, enhancing decision interpretability. RegionFocus requires no fine-tuning and integrates seamlessly with open-source VLMs (e.g., UI-TARS, Qwen2.5-VL). On the ScreenSpot-Pro and WebVoyager benchmarks, it achieves relative improvements of over 28% and 24%, respectively; Qwen2.5-VL-72B augmented with RegionFocus attains a state-of-the-art grounding accuracy of 61.6%. This work provides systematic empirical evidence that test-time visual scaling significantly enhances GUI agent grounding performance.
📝 Abstract
We introduce RegionFocus, a visual test-time scaling approach for Vision Language Model Agents. Understanding webpages is challenging due to the visual complexity of GUI images and the large number of interface elements, making accurate action selection difficult. Our approach dynamically zooms in on relevant regions, reducing background clutter and improving grounding accuracy. To support this process, we propose an image-as-map mechanism that visualizes key landmarks at each step, providing a transparent action record and enabling the agent to choose effectively among action candidates. Even with a simple region selection strategy, we observe significant performance gains of 28+% on ScreenSpot-Pro and 24+% on WebVoyager benchmarks on top of two state-of-the-art open vision language model agents, UI-TARS and Qwen2.5-VL, highlighting the effectiveness of visual test-time scaling in interactive settings. We achieve a new state-of-the-art grounding performance of 61.6% on the ScreenSpot-Pro benchmark by applying RegionFocus to a Qwen2.5-VL-72B model. Our code will be released publicly at https://github.com/tiangeluo/RegionFocus.
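The core geometry behind this kind of test-time zooming can be illustrated with a minimal sketch. All function names here are hypothetical (not the authors' released API); the sketch only shows the two coordinate transforms such an approach needs: clamping a focus region around the agent's coarse guess, and mapping a prediction made inside the upscaled crop back to full-screenshot coordinates.

```python
# Hypothetical sketch of RegionFocus-style test-time zooming.
# Names (crop_box, to_global) are illustrative, not the paper's API.

def crop_box(cx, cy, img_w, img_h, box_w, box_h):
    """Crop box of size box_w x box_h centered on the coarse guess
    (cx, cy), clamped so it stays inside the screenshot."""
    left = min(max(cx - box_w // 2, 0), img_w - box_w)
    top = min(max(cy - box_h // 2, 0), img_h - box_h)
    return (left, top, left + box_w, top + box_h)

def to_global(local_x, local_y, box, scale):
    """Map a VLM prediction made inside the crop (rendered at `scale`x
    magnification) back to full-screenshot coordinates."""
    left, top, _, _ = box
    return (left + local_x / scale, top + local_y / scale)

# Example: 1920x1080 screenshot, coarse guess near the top-right corner
# at (1900, 60); a 400x300 focus region is clamped to the screen edge
# and shown to the VLM at 2x for a second, fine-grained grounding query.
box = crop_box(1900, 60, 1920, 1080, 400, 300)
x, y = to_global(350, 90, box, scale=2.0)
```

In a full agent loop, the second query on the magnified crop is what recovers small targets lost in the cluttered full screenshot; the remapped (x, y) is then used as the click location.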