Zoom to Essence: Trainless GUI Grounding by Inferring upon Interface Elements

πŸ“… 2026-03-15
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work proposes ZoomUI, a training-free approach for GUI grounding that circumvents the high cost and data-quality limitations of conventional methods relying on extensive labeled datasets to fine-tune multimodal large language models. ZoomUI leverages inference-time scaling to progressively anchor natural language instructions to interface elements through a synergistic combination of latent reasoning optimization, internal attention mechanisms, and iterative region zooming. This enables a stepwise visual focus refinement process without any model training. Evaluated across multiple benchmarks, ZoomUI achieves performance on par with or superior to state-of-the-art methods while drastically reducing dependence on annotated data, thereby establishing the first training-free GUI interaction paradigm grounded in inference-time scaling.

πŸ“ Abstract
Multimodal Large Language Model (MLLM)-based Graphical User Interface (GUI) agents are developing rapidly, with visual grounding, which maps natural language instructions to target UI elements, serving as their core capability. Existing GUI agents typically fine-tune MLLMs on massive datasets to handle the challenges of understanding instructions and UI interfaces, which not only incurs high data-annotation costs but also makes performance dependent on data quality and distribution. To avoid such cumbersome yet often ineffective training, we observe that complex UI interfaces can be decomposed into basic visual elements that common MLLMs understand directly. Consequently, we propose ZoomUI, which leverages inference-time scaling to guide common MLLMs in progressively anchoring instructions to increasingly detailed interface elements. Specifically, ZoomUI first optimizes latent thinking to transform the original instruction into a description of the target element's visual features, and then leverages internal attention to iteratively zoom in on the target element's interface region. Evaluations on extensive benchmarks demonstrate that ZoomUI matches or even surpasses SOTA baselines.
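The iterative zooming loop described in the abstract can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: `attention_fn` is a hypothetical stand-in for the MLLM's internal attention peak over the instruction, and all parameter names (`min_size`, `shrink`, `max_steps`) are assumptions.

```python
def zoom_to_element(region, attention_fn, min_size=32, shrink=0.5, max_steps=8):
    """Iteratively shrink `region` = (x, y, w, h) around the attention peak.

    `attention_fn(region)` is assumed to return the (px, py) pixel inside the
    current region where the model attends most strongly; here it abstracts
    the MLLM's internal attention described in the paper.
    """
    x, y, w, h = region
    for _ in range(max_steps):
        if max(w, h) <= min_size:
            break  # region is small enough to ground the element
        px, py = attention_fn((x, y, w, h))
        # next window: a shrunken crop, never smaller than min_size
        nw = max(int(w * shrink), min_size)
        nh = max(int(h * shrink), min_size)
        # re-center the smaller window on the attention peak,
        # clamped so it stays inside the current region
        x = min(max(px - nw // 2, x), x + w - nw)
        y = min(max(py - nh // 2, y), y + h - nh)
        w, h = nw, nh
    return (x, y, w, h)
```

With a fixed attention peak at (300, 200) on a 1024x768 screenshot, the loop converges to a small window containing that point, mimicking the stepwise visual-focus refinement without any training.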
Problem

Research questions and friction points this paper is trying to address.

GUI grounding
Multimodal Large Language Model
visual grounding
data annotation cost
instruction-to-UI mapping
Innovation

Methods, ideas, or system contributions that make the work stand out.

trainless GUI grounding
inference scaling
multimodal LLM
visual grounding
iterative attention