Mobile-R1: Towards Interactive Reinforcement Learning for VLM-Based Mobile Agent via Task-Level Rewards

📅 2025-06-25
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing vision-language model (VLM)-based mobile agents predominantly rely on offline reinforcement learning (RL) or action-level online rewards, resulting in weak dynamic interaction capabilities, susceptibility to local optima, and insufficient exploration and error correction. To address these limitations, we propose Mobile-R1, a task-level reward-driven, three-stage interactive multi-turn online RL framework, the first to incorporate task-level rewards into VLM-based mobile agent training. Our approach integrates Group Relative Policy Optimization (GRPO), format-aware fine-tuning, and a dual-granularity reward mechanism that jointly supervises both atomic actions and high-level task completion. We introduce a high-quality benchmark comprising 24,521 instructions across 28 Chinese mobile applications and 500 expert trajectories. Experiments demonstrate substantial improvements in task success rate and environmental adaptability. All code, data, and models are publicly released.

๐Ÿ“ Abstract
Vision-language model-based mobile agents have gained the ability not only to understand complex instructions and mobile screenshots, but also to optimize their action outputs via thinking and reasoning, benefiting from reinforcement learning such as Group Relative Policy Optimization (GRPO). However, existing research centers on offline reinforcement learning training or online optimization using action-level rewards, which limits the agent's dynamic interaction with the environment. This often results in agents settling into local optima, thereby weakening their ability to explore and to correct erroneous actions. To address these challenges, we introduce an approach called Mobile-R1, which employs interactive multi-turn reinforcement learning with task-level rewards for mobile agents. Our training framework consists of three stages: initial format finetuning, single-step online training via action-level rewards, and finally online training via task-level rewards based on multi-turn trajectories. This strategy is designed to enhance the exploration and error-correction capabilities of Mobile-R1, leading to significant performance improvements. Moreover, we have collected a dataset covering 28 Chinese applications with 24,521 high-quality manual annotations and established a new benchmark with 500 trajectories. We will open source all resources, including the dataset, benchmark, model weights, and code: https://mobile-r1.github.io/Mobile-R1/.
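The group-relative advantage at the heart of GRPO, as referenced in the abstract, can be sketched in a few lines: several rollouts are sampled for the same instruction, and each rollout's reward is normalized against the group's mean and standard deviation instead of a learned value baseline. The function and variable names below are illustrative, not taken from the paper's released code.

```python
# Minimal sketch of the GRPO group-relative advantage computation.
# For a group of rollouts sampled from the same instruction:
#   A_i = (r_i - mean(r)) / (std(r) + eps)
import statistics


def grpo_advantages(group_rewards, eps=1e-8):
    """Normalize each rollout's reward against its sampling group."""
    mean_r = statistics.fmean(group_rewards)
    std_r = statistics.pstdev(group_rewards)
    return [(r - mean_r) / (std_r + eps) for r in group_rewards]


# Example: four rollouts for one instruction, scored by a task-level reward.
rewards = [1.0, 0.0, 0.5, 1.0]
advs = grpo_advantages(rewards)
```

Rollouts that beat their group average receive a positive advantage and are reinforced; the advantages of a group always sum to (approximately) zero, so no critic network is needed.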
Problem

Research questions and friction points this paper is trying to address.

Enhancing mobile agents' exploration and error correction via task-level rewards
Overcoming local optima in VLM-based mobile agent reinforcement learning
Developing interactive multi-turn RL for dynamic environment interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interactive multi-turn reinforcement learning
Task-level rewards for mobile agents
Three-stage training framework
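The dual-granularity reward described in the summary, combining per-step action-level supervision with a trajectory-wide task-level signal, can be sketched as follows. The reward definitions and the equal weighting are illustrative assumptions for exposition, not the paper's exact formulation.

```python
# Hedged sketch of a dual-granularity reward: an action-level score for each
# step (e.g., well-formatted output and correct atomic action) blended with a
# binary task-level signal for whether the multi-turn trajectory completed
# the instruction. Weights and field names are illustrative assumptions.


def action_reward(step):
    """1.0 if the step's output is well-formatted and the atomic action is
    correct, else 0.0 (illustrative)."""
    return 1.0 if step.get("format_ok") and step.get("action_ok") else 0.0


def trajectory_reward(steps, task_completed, w_action=0.5, w_task=0.5):
    """Blend the mean per-step reward with a binary task-completion signal."""
    step_score = sum(action_reward(s) for s in steps) / max(len(steps), 1)
    return w_action * step_score + w_task * (1.0 if task_completed else 0.0)


# Example: two-step trajectory where the second atomic action was wrong.
traj = [
    {"format_ok": True, "action_ok": True},
    {"format_ok": True, "action_ok": False},
]
```

Under this sketch, a trajectory can still earn partial credit from correct intermediate actions even when the overall task fails, which is what lets the task-level stage refine, rather than replace, the action-level stage.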
Jihao Gu
University College London
Computer Vision
Qihang Ai
Taobao & Tmall Group of Alibaba
Yingyao Wang
Alibaba Group, Harbin Institute of Technology
LVLM, Question Answering, Knowledge Reasoning
Pi Bu
Taobao & Tmall Group of Alibaba
Jingxuan Xing
Taobao & Tmall Group of Alibaba
Zekun Zhu
Taobao & Tmall Group of Alibaba
Wei Jiang
Taobao & Tmall Group of Alibaba
Ziming Wang
Taobao & Tmall Group of Alibaba
Yingxiu Zhao
Taobao & Tmall Group of Alibaba
Ming-Liang Zhang
PhD, Senior Algorithm Engineer at Alibaba Beijing
Multimodal Reasoning, Math Problem Solving, Scene Parsing
Jun Song
Shenzhen University
Nanophotonics
Yuning Jiang
Taobao & Tmall Group of Alibaba
Bo Zheng
Taobao & Tmall Group of Alibaba