AdaZoom-GUI: Adaptive Zoom-based GUI Grounding with Instruction Refinement

📅 2026-03-18
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses the challenge of grounding vision-language models in high-resolution GUI screenshots, where fine-grained interface elements and ambiguous user instructions hinder accurate localization. To tackle this, the authors propose an adaptive zoom framework that combines an instruction refinement module, which rewrites vague commands into precise descriptions, with a conditional zoom-in strategy that performs second-stage inference on initially predicted small elements, balancing accuracy and efficiency. A high-quality GUI grounding dataset is curated, and Group Relative Policy Optimization (GRPO) is leveraged to jointly optimize click-coordinate and bounding-box predictions. Evaluated on public benchmarks, the method achieves state-of-the-art performance among models of comparable or larger size, significantly improving high-resolution GUI understanding and practical GUI agent deployment.
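
Since the summary says GRPO jointly optimizes click coordinates and bounding boxes, a minimal reward sketch may help make that concrete. The paper's exact reward design is not reproduced here; the function below, its weights, and the (x1, y1, x2, y2) box format are illustrative assumptions only.

```python
# Illustrative sketch of a combined grounding reward for GRPO-style training.
# The paper's actual reward terms and weights are not specified here.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_reward(click, pred_box, gt_box, w_click=0.5, w_box=0.5):
    """Score a rollout by (a) whether the click lands inside the ground-truth
    box and (b) the IoU of the predicted box; the weights are hypothetical."""
    cx, cy = click
    click_hit = float(gt_box[0] <= cx <= gt_box[2] and gt_box[1] <= cy <= gt_box[3])
    return w_click * click_hit + w_box * iou(pred_box, gt_box)
```

In GRPO, per-rollout rewards like this would be normalized within each group of sampled responses to form relative advantages, rather than fed to a learned value model.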

📝 Abstract
GUI grounding is a critical capability for vision-language models (VLMs) that enables automated interaction with graphical user interfaces by locating target elements from natural language instructions. However, grounding on GUI screenshots remains challenging due to high-resolution images, small UI elements, and ambiguous user instructions. In this work, we propose AdaZoom-GUI, an adaptive zoom-based GUI grounding framework that improves both localization accuracy and instruction understanding. Our approach introduces an instruction refinement module that rewrites natural language commands into explicit and detailed descriptions, allowing the grounding model to focus on precise element localization. In addition, we design a conditional zoom-in strategy that selectively performs a second-stage inference on predicted small elements, improving localization accuracy while avoiding unnecessary computation and context loss on simpler cases. To support this framework, we construct a high-quality GUI grounding dataset and train the grounding model using Group Relative Policy Optimization (GRPO), enabling the model to predict both click coordinates and element bounding boxes. Experiments on public benchmarks demonstrate that our method achieves state-of-the-art performance among models with comparable or even larger parameter sizes, highlighting its effectiveness for high-resolution GUI understanding and practical GUI agent deployment.
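
As a minimal sketch of the conditional zoom-in idea only (not the authors' implementation): if the first-pass box covers a small fraction of the screenshot, crop a padded region around it, re-run grounding on the crop, and map the result back to full-image coordinates. The `model.ground` interface, the area threshold, and the padding factor below are all assumptions.

```python
from PIL import Image  # pip install pillow

def conditional_zoom_ground(model, image: Image.Image, instruction,
                            small_frac=0.01, pad=2.0):
    """Two-stage grounding: zoom in only when the first-pass prediction is a
    small element. `model.ground(image, instruction)` is a hypothetical
    interface returning an (x1, y1, x2, y2) box in pixel coordinates."""
    W, H = image.size
    box = model.ground(image, instruction)        # first-pass inference
    bw, bh = box[2] - box[0], box[3] - box[1]
    if bw * bh > small_frac * W * H:              # large element: accept as-is
        return box
    # Small element: crop a padded region around it and run a second pass.
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    x1, y1 = max(0, int(cx - pad * bw)), max(0, int(cy - pad * bh))
    x2, y2 = min(W, int(cx + pad * bw)), min(H, int(cy + pad * bh))
    crop = image.crop((x1, y1, x2, y2))
    rx1, ry1, rx2, ry2 = model.ground(crop, instruction)
    # Map refined coordinates back to the full screenshot.
    return (rx1 + x1, ry1 + y1, rx2 + x1, ry2 + y1)
```

The conditional gate is what keeps average cost low: large elements are resolved in one pass, and only small predictions pay for the second inference.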
Problem

Research questions and friction points this paper is trying to address.

GUI grounding
vision-language models
high-resolution images
ambiguous instructions
small UI elements
Innovation

Methods, ideas, or system contributions that make the work stand out.

instruction refinement
adaptive zoom
conditional zoom-in
GUI grounding
GRPO
👥 Authors
Siqi Pei (Lenovo Research, Beijing, China)
Liang Tang (Google) · Reinforcement Learning, Recommender System, Personalization, Computational Advertising, Ads Quality
Tiaonan Duan (Lenovo Research, Beijing, China)
Long Chen (Lenovo Research, Beijing, China)
Shuxian Li (USDA-ARS) · Plant Pathology
Kaer Huang (Lenovo Research) · Reinforcement Learning, LLM/MLLM, GUI Agent
Yanzhe Jing (Lenovo Research, Beijing, China)
Yiqiang Yan (Lenovo)
Bo Zhang (Tsinghua University) · protein design, bioinformatics
Chenghao Jiang (Tsinghua University, Beijing, China)
Borui Zhang (Ph.D. student, Tsinghua University) · Computer Vision, Machine Learning, Metric Learning, Explainable AI
Jiwen Lu (Tsinghua University, Beijing, China)