MobileRL: Online Agentic Reinforcement Learning for Mobile GUI Agents

📅 2025-09-10
🏛️ arXiv.org
📈 Citations: 2
✨ Influential: 1
🤖 AI Summary
Mobile GUI agents face two key challenges in reinforcement learning: a heavy-tailed distribution of task difficulty and low environment sampling efficiency. To address these, this paper proposes MobileRL, an online agentic RL framework for vision-language-model (VLM) GUI agents, built around the Difficulty-Adaptive GRPO (ADAGRPO) algorithm. Methodologically, it introduces three components: difficulty-adaptive positive replay, failure curriculum filtering, and shortest-path reward adjustment, which together enable difficulty-aware curriculum learning and stable policy optimization. Evaluated on the AndroidWorld and AndroidLab benchmarks, the resulting MobileRL-9B model achieves state-of-the-art success rates of 75.8% and 46.8%, respectively, significantly outperforming prior approaches. The framework has been integrated into the AutoGLM production system and open-sourced, advancing the practical deployment of general-purpose mobile GUI agents.

๐Ÿ“ Abstract
Building general-purpose graphical user interface (GUI) agents has become increasingly promising with the progress in vision language models. However, developing effective mobile GUI agents with reinforcement learning (RL) remains challenging due to the heavy-tailed distribution of task difficulty and the inefficiency of large-scale environment sampling. We present MOBILERL, an online agentic reinforcement learning framework to enhance GUI agents in mobile environments. Its core component is the Difficulty-Adaptive GRPO (ADAGRPO) algorithm. In ADAGRPO, we design difficulty-adaptive positive replay and failure curriculum filtering to adapt the model to different task difficulties. We introduce the shortest-path reward adjustment strategy to reshape rewards with respect to task length in multi-turn agentic tasks. These strategies jointly stabilize RL training, improve sample efficiency, and yield strong performance across diverse mobile apps and tasks. We apply MOBILERL to two open models (Qwen2.5-VL-7B-Instruct and GLM-4.1V-9B-Base). The resulting MOBILERL-9B model achieves state-of-the-art success rates on both AndroidWorld (75.8%) and AndroidLab (46.8%). The MOBILERL framework is adopted in the AutoGLM products and is open-sourced at https://github.com/THUDM/MobileRL.
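The data-curation ideas named in the abstract can be sketched as follows. The paper's exact rules are not given on this page, so the thresholds, function name, and data shapes below are assumptions, not the authors' implementation: failure curriculum filtering drops tasks whose current success rate is too low to carry a learning signal, while difficulty-adaptive positive replay banks rare successes on hard tasks and mixes them back into training.

```python
import random

def filter_and_replay(tasks, results, success_rate, replay_buffer, min_rate=0.05):
    """Hypothetical sketch of difficulty-adaptive positive replay and
    failure curriculum filtering (thresholds 0.05 and 0.5 are assumed):
    - skip tasks the policy currently almost never solves;
    - bank successful trajectories on hard (rate < 0.5) tasks;
    - mix a few banked positives back into the training batch."""
    batch = []
    for task in tasks:
        rate = success_rate.get(task, 0.0)
        if rate < min_rate:
            continue  # curriculum filtering: too hard for now, revisit later
        for traj, reward in results.get(task, []):
            batch.append((task, traj, reward))
            if reward > 0 and rate < 0.5:
                replay_buffer.append((task, traj, reward))  # rare positive on a hard task
    # positive replay: re-inject a few banked successes into the batch
    batch.extend(random.sample(replay_buffer, min(len(replay_buffer), 4)))
    return batch
```

For example, a task with a 0.0 success rate contributes nothing this round, while a lone success on a task solved 20% of the time is both trained on and banked for future batches.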
Problem

Research questions and friction points this paper is trying to address.

Enhancing mobile GUI agents through adaptive reinforcement learning
Addressing task difficulty distribution and sampling inefficiency challenges
Improving training stability and performance across diverse applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Difficulty-adaptive replay and filtering for task adaptation
Shortest-path reward adjustment for multi-turn tasks
Online agentic RL framework for mobile GUI agents
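The shortest-path reward adjustment listed above can be sketched as a length-aware success reward. The paper's exact formula is not given on this page, so the scaling below (success reward shrinks as the episode grows longer than the shortest known trajectory) is an assumption for illustration:

```python
def shortest_path_reward(success: bool, steps: int, shortest_known: int) -> float:
    """Hypothetical shortest-path reward shaping: a successful episode
    earns more when its length is close to the shortest known trajectory
    for the task (the exact shaping used by ADAGRPO is assumed here)."""
    if not success:
        return 0.0  # failed episodes receive no task reward
    # scale the success reward down as the trajectory grows longer
    return shortest_known / max(steps, shortest_known)
```

Under this shaping, a 5-step success on a task whose shortest known solution is 5 steps earns 1.0, while a 10-step success earns 0.5, discouraging redundant actions in multi-turn tasks without punishing failure exploration twice.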
👥 Authors
Yifan Xu (Tsinghua University)
Xiao Liu (Tsinghua University)
Xinghan Liu (Tsinghua University)
Jiaqi Fu (Tsinghua University)
Hanchen Zhang (Tsinghua University)
Bohao Jing (Z.AI)
Shudan Zhang (Tsinghua University)
Yuting Wang (Z.AI)
Wenyi Zhao (Z.AI)
Yuxiao Dong (CS, Tsinghua University)
Large Language Models · Vision Language Models · LLM Reasoning · LLM Agent · Graph Machine Learning