Towards GUI Agents: Vision-Language Diffusion Models for GUI Grounding

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work explores the application of discrete diffusion vision-language models (DVLMs) to graphical user interface (GUI) grounding tasks as an alternative to conventional autoregressive approaches. The authors adapt LLaDA-V into a single-turn action and bounding-box prediction model, formulating GUI grounding as a multimodal conditional text generation problem. To better capture geometric structure, they propose a hybrid scheduling strategy that combines linear and deterministic masking. Experiments across four cross-platform GUI datasets demonstrate that the method improves accuracy by up to 6.1 points in Step Success Rate over purely linear masking baselines. Furthermore, with expanded training data, the model achieves an average accuracy gain of 20 percentage points and reduces inference latency by approximately 1.3 seconds, confirming the effectiveness and potential of discrete diffusion models for GUI agent tasks.
📝 Abstract
Autoregressive (AR) vision-language models (VLMs) have long dominated multimodal understanding, reasoning, and graphical user interface (GUI) grounding. Recently, discrete diffusion vision-language models (DVLMs) have shown strong performance in multimodal reasoning, offering bidirectional attention, parallel token generation, and iterative refinement. However, their potential for GUI grounding remains unexplored. In this work, we evaluate whether discrete DVLMs can serve as a viable alternative to AR models for GUI grounding. We adapt LLaDA-V for single-turn action and bounding-box prediction, framing the task as text generation from multimodal input. To better capture the hierarchical structure of bounding-box geometry, we propose a hybrid masking schedule that combines linear and deterministic masking, improving grounding accuracy by up to 6.1 points in Step Success Rate (SSR) over the GUI-adapted LLaDA-V trained with linear masking. Evaluations on four datasets spanning web, desktop, and mobile interfaces show that the adapted diffusion model with hybrid masking consistently outperforms the linear-masked variant and performs competitively with autoregressive counterparts despite limited pretraining. Systematic ablations reveal that increasing diffusion steps, generation length, and block length improves accuracy but also increases latency, with accuracy plateauing beyond a certain number of diffusion steps. Expanding the training data with diverse GUI domains further reduces latency by about 1.3 seconds and improves grounding accuracy by an average of 20 points across benchmarks. These results demonstrate that discrete DVLMs are a promising modeling framework for GUI grounding and represent an important step toward diffusion-based GUI agents.
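The abstract describes a hybrid masking schedule that mixes linear (random, rate-driven) masking with deterministic masking of bounding-box geometry tokens. The paper's exact formulation is not given here, so the following is only an illustrative sketch under assumed semantics: ordinary tokens are masked independently with probability equal to the diffusion timestep `t` (the linear schedule), while coordinate-token positions are masked deterministically until a fixed late-stage threshold. The function name, the `<mask>` sentinel, and the `0.25` threshold are all hypothetical.

```python
import random

MASK = "<mask>"  # hypothetical mask sentinel

def hybrid_mask(tokens, coord_positions, t, seed=None):
    """Illustrative sketch of a hybrid masking schedule (assumed semantics).

    tokens:          list of token strings
    coord_positions: set of indices holding bounding-box coordinate tokens
    t:               diffusion timestep in [0, 1]; higher t = more masking
    """
    rng = random.Random(seed)
    out = []
    for i, tok in enumerate(tokens):
        if i in coord_positions:
            # Deterministic branch: coordinate tokens remain masked
            # until the schedule drops below a fixed threshold.
            out.append(MASK if t > 0.25 else tok)
        else:
            # Linear branch: mask each ordinary token independently
            # with probability t.
            out.append(MASK if rng.random() < t else tok)
    return out
```

Under this sketch, geometry tokens are revealed together late in denoising, while surrounding action text is revealed gradually; the paper's actual schedule may differ in both the threshold and the masking granularity.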
Problem

Research questions and friction points this paper is trying to address.

GUI grounding
discrete diffusion models
vision-language models
multimodal reasoning
bounding-box prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

discrete diffusion vision-language models
GUI grounding
hybrid masking schedule
bounding-box prediction
multimodal reasoning