GUI-AIMA: Aligning Intrinsic Multimodal Attention with a Context Anchor for GUI Grounding

📅 2025-11-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses GUI grounding — the task of precisely localizing on-screen interactive regions from natural-language instructions. Unlike prevailing multimodal large language models (MLLMs) that formulate grounding as text-based coordinate generation, the authors propose GUI-AIMA, a coordinate-free attention alignment framework. It leverages the intrinsic cross-modal attention of MLLMs to align textual instructions with patch-wise visual features, introduces context-anchor tokens for adaptive grounding-signal generation, and incorporates a plug-and-play zoom-in localization stage. By aggregating multi-head attention over a simplified query-vision attention matrix and applying supervised fine-tuning, the framework strengthens the model's native visual localization capability. Trained on only 85k screenshots, GUI-AIMA-3B achieves state-of-the-art accuracy among 3B-scale models: 58.6% on ScreenSpot-Pro and 62.2% on OSWorld-G.

📝 Abstract
Graphical user interface (GUI) grounding is a key function of computer-use agents, which maps natural-language instructions to actionable screen regions. Existing approaches based on Multimodal Large Language Models (MLLMs) typically formulate it as a text-based coordinate generation task, yet directly generating precise coordinates from visual inputs remains challenging and computationally intensive. An intuitive way to implement GUI grounding is to first select visual patches relevant to the instruction and then determine the precise click location within those patches. Based on the observation that general MLLMs have some native grounding capability nested within their attention, we propose GUI-AIMA, an attention-based and coordinate-free supervised fine-tuning framework for efficient GUI grounding. GUI-AIMA aligns the intrinsic multimodal attention of MLLMs with patch-wise grounding signals. These signals are calculated adaptively for diverse user instructions by multi-head aggregation on simplified query-visual attention matrices. Moreover, its coordinate-free formulation can easily integrate a plug-and-play zoom-in stage. GUI-AIMA-3B was trained with only 85k screenshots, demonstrating exceptional data efficiency and verifying that light training can trigger the native grounding capability of MLLMs. It achieves state-of-the-art performance among 3B models, attaining an average accuracy of 58.6% on ScreenSpot-Pro and 62.2% on OSWorld-G. Project page: https://github.com/sjz5202/GUI-AIMA
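The multi-head aggregation over a query-visual attention matrix described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the tensor layout, the choice of query token, and mean-pooling across heads are all assumptions made for clarity.

```python
import numpy as np

def patchwise_grounding_signal(attn, query_idx, num_patches):
    """Aggregate multi-head attention into a patch-wise grounding signal.

    attn: array of shape (heads, seq_len, seq_len) -- attention weights
          from one decoder layer (hypothetical layout).
    query_idx: index of the instruction/anchor token used as the query.
    num_patches: number of visual patch tokens, assumed to occupy the
                 first `num_patches` positions of the sequence.
    """
    # Slice the query-to-vision attention: one row per head.
    q2v = attn[:, query_idx, :num_patches]   # (heads, num_patches)
    # Simple multi-head aggregation: average across heads.
    signal = q2v.mean(axis=0)                # (num_patches,)
    # Normalize to a distribution over visual patches.
    return signal / signal.sum()

# Toy example: 4 heads, 16-token sequence, 12 visual patch tokens.
rng = np.random.default_rng(0)
attn = rng.random((4, 16, 16))
signal = patchwise_grounding_signal(attn, query_idx=15, num_patches=12)
best_patch = int(signal.argmax())  # patch the model would ground to
```

During supervised fine-tuning, a distribution like `signal` would be aligned against a ground-truth patch mask rather than used directly at inference.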
Problem

Research questions and friction points this paper is trying to address.

Aligns multimodal attention with context anchors for GUI grounding
Maps natural language instructions to actionable screen regions
Enhances GUI grounding without generating precise coordinates directly
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns multimodal attention with patch-wise grounding signals
Uses multi-head aggregation on query-visual attention matrices
Integrates plug-and-play zoom-in stage without coordinates
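Because grounding is expressed over patches rather than coordinates, a zoom-in stage can be bolted on by cropping around the best-scoring patch and re-running grounding on the crop. The sketch below is a hypothetical geometry helper (the grid layout, `scale` factor, and function name are illustrative assumptions):

```python
def zoom_in_crop(image_w, image_h, patch_idx, grid_w, grid_h, scale=3):
    """Compute a crop box around the best-scoring patch for a second,
    zoomed-in grounding pass. All parameters are illustrative."""
    # Patch position in the grid (row-major indexing assumed).
    px, py = patch_idx % grid_w, patch_idx // grid_w
    # Patch size in pixels.
    pw, ph = image_w / grid_w, image_h / grid_h
    # Center of the selected patch.
    cx, cy = (px + 0.5) * pw, (py + 0.5) * ph
    # Expand the window by `scale` patches in each dimension,
    # clamped to the image bounds.
    half_w, half_h = scale * pw / 2, scale * ph / 2
    left = max(0, cx - half_w)
    top = max(0, cy - half_h)
    right = min(image_w, cx + half_w)
    bottom = min(image_h, cy + half_h)
    return left, top, right, bottom

# 1920x1080 screenshot, 8x6 patch grid, patch 37 selected.
box = zoom_in_crop(1920, 1080, patch_idx=37, grid_w=8, grid_h=6)
```

The cropped region would then be re-encoded at higher effective resolution, and the same attention-based grounding applied within it.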