MobileGUI-RL: Advancing Mobile GUI Agent through Reinforcement Learning in Online Environment

📅 2025-07-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-based GUI agents rely on offline trajectory training, resulting in poor generalization, overfitting to fixed UI templates, and limited adaptability to novel environments. This paper introduces the first online reinforcement learning framework for mobile GUI agents, eliminating dependence on pre-collected datasets. Our approach comprises three key components: (1) a self-exploration-driven task curriculum generation mechanism enabling scalable continual learning; (2) an enhanced GRPO algorithm incorporating trajectory-aware advantage estimation and a composite reward function that jointly optimizes task success rate and execution efficiency; and (3) end-to-end visual GUI understanding integrated with real-time environmental interaction. Evaluated on three online mobile agent benchmarks, our method achieves significant improvements in task completion rate and cross-application generalization, demonstrating superior robustness and practicality.

📝 Abstract
Recently, there has been a surge of vision-based GUI agents designed to automate everyday mobile and web tasks. These agents interpret raw GUI screenshots and autonomously decide where to click, scroll, or type, bypassing handcrafted rules and app-specific APIs. However, most existing methods train GUI agents in offline environments using pre-collected trajectories. This approach limits scalability, causes overfitting to specific UI templates, and leads to brittle policies when faced with unseen environments. We present MobileGUI-RL, a scalable framework that trains GUI agents in an online environment. MobileGUI-RL contains two key components. It (i) synthesizes a curriculum of learnable tasks through self-exploration and filtering, and (ii) adapts GRPO to GUI navigation with trajectory-aware advantages and composite rewards that balance task success and execution efficiency. Experiments on three online mobile-agent benchmarks show consistent gains, validating the effectiveness of our approach.
Problem

Research questions and friction points this paper is trying to address.

Enhancing mobile GUI agent scalability in online environments
Overcoming overfitting to specific UI templates in training
Improving policy robustness for unseen GUI environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trains GUI agents in an online environment
Synthesizes curriculum through self-exploration
Adapts GRPO with trajectory-aware advantages
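The GRPO adaptation described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the reward weights, the group-normalization details, and the names `composite_reward` and `trajectory_advantages` are all assumptions made for the example.

```python
from statistics import mean, stdev

def composite_reward(success: bool, num_steps: int,
                     success_bonus: float = 1.0,
                     step_penalty: float = 0.02) -> float:
    # Hypothetical composite reward: reward task completion,
    # penalize each action step to encourage execution efficiency.
    return (success_bonus if success else 0.0) - step_penalty * num_steps

def trajectory_advantages(rewards: list[float]) -> list[float]:
    # GRPO-style group-relative advantage: normalize each trajectory's
    # scalar reward against the group of rollouts for the same task.
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + 1e-6) for r in rewards]

# Four rollouts of the same task: (succeeded?, number of GUI actions)
rollouts = [(True, 8), (True, 15), (False, 20), (True, 10)]
rewards = [composite_reward(s, n) for s, n in rollouts]
advs = trajectory_advantages(rewards)
```

Each trajectory-level advantage would then be broadcast to every action in that trajectory during the policy update, so short successful rollouts are reinforced more strongly than long or failed ones.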
Yucheng Shi
University of Georgia
Synthetic Data · Data-centric AI · Responsible AI · Explainability
Wenhao Yu
Tencent AI Seattle Lab
Zaitang Li
Tencent AI Seattle Lab, Chinese University of Hong Kong
Yonglin Wang
Tencent AI Seattle Lab
Hongming Zhang
Tencent AI Seattle Lab
Ninghao Liu
Assistant Professor, University of Georgia
Explainable AI · Fairness in Machine Learning · Graph Mining · Anomaly Detection
Haitao Mi
Principal Researcher, Tencent US
Large Language Models
Dong Yu
Tencent AI Seattle Lab