AI Summary
Existing vision-language model (VLM)-based mobile agents predominantly rely on offline reinforcement learning (RL) or action-level online rewards, resulting in weak dynamic interaction capabilities, susceptibility to local optima, and insufficient exploration and error correction. To address these limitations, we propose a task-level reward-driven, three-stage interactive multi-turn online RL framework, the first to incorporate task-level rewards into VLM-based mobile agent training. Our approach integrates Group Relative Policy Optimization (GRPO), format-aware fine-tuning, and a dual-granularity reward mechanism that jointly supervises both atomic actions and high-level task completion. We introduce a high-quality benchmark comprising 24,521 instructions across 28 Chinese mobile applications and 500 expert trajectories. Experiments demonstrate substantial improvements in task success rate and environmental adaptability. All code, data, and models are publicly released.
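The dual-granularity reward idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the 0.5 weighting of format versus action correctness, and the trajectory-level bonus are all assumptions chosen for clarity.

```python
# Hedged sketch of a dual-granularity reward: per-step (action-level)
# signals plus a trajectory-wide (task-level) bonus. All weights and
# names here are illustrative assumptions, not the paper's design.

def action_level_reward(action_ok: bool, format_ok: bool) -> float:
    """Reward a single step for a valid action and well-formed output."""
    return 0.5 * float(action_ok) + 0.5 * float(format_ok)

def task_level_reward(task_completed: bool, format_ok_rate: float) -> float:
    """Reward the whole multi-turn trajectory for task completion,
    plus a term for how consistently the output format was respected."""
    return float(task_completed) + format_ok_rate

def trajectory_reward(step_signals, task_completed: bool) -> float:
    """Combine per-step rewards with the trajectory-level bonus.

    step_signals: list of (action_ok, format_ok) booleans, one per step.
    """
    step_total = sum(action_level_reward(a, f) for a, f in step_signals)
    fmt_rate = sum(float(f) for _, f in step_signals) / len(step_signals)
    return step_total + task_level_reward(task_completed, fmt_rate)
```

Under this sketch, a trajectory is rewarded both for each atomic action it gets right and for completing the overall task, which is the "dual-granularity" supervision the summary describes.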
Abstract
Vision-language model-based mobile agents have gained the ability not only to understand complex instructions and mobile screenshots, but also to optimize their action outputs via thinking and reasoning, benefiting from reinforcement learning methods such as Group Relative Policy Optimization (GRPO). However, existing research centers on offline reinforcement learning training or online optimization using action-level rewards, which limits the agent's dynamic interaction with the environment. This often results in agents settling into local optima, thereby weakening their capacity for exploration and error correction. To address these challenges, we introduce Mobile-R1, an approach that employs interactive multi-turn reinforcement learning with task-level rewards for mobile agents. Our training framework consists of three stages: initial format fine-tuning; single-step online training with action-level rewards; and online training with task-level rewards based on multi-turn trajectories. This strategy is designed to enhance the exploration and error-correction capabilities of Mobile-R1, leading to significant performance improvements. Moreover, we have collected a dataset covering 28 Chinese applications with 24,521 high-quality manual annotations and established a new benchmark with 500 trajectories. We will open-source all resources, including the dataset, benchmark, model weights, and code: https://mobile-r1.github.io/Mobile-R1/.
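The GRPO component mentioned above can be illustrated with its core computation: rolling out a group of candidate trajectories for the same instruction and normalizing each trajectory's reward against the group's statistics to obtain a relative advantage. This is a simplified sketch of that group-relative step, assuming scalar rewards; it is not the paper's training code.

```python
import statistics

def grpo_advantages(group_rewards):
    """Group-relative advantages, GRPO-style: each rollout's reward is
    normalized by the mean and standard deviation of its group, so a
    trajectory is judged relative to its peers rather than absolutely.

    group_rewards: list of scalar rewards, one per sampled rollout
    for the same instruction. Returns one advantage per rollout.
    """
    mean = statistics.fmean(group_rewards)
    std = statistics.pstdev(group_rewards)
    if std == 0.0:
        # All rollouts scored identically: no relative signal.
        return [0.0 for _ in group_rewards]
    return [(r - mean) / std for r in group_rewards]
```

With task-level rewards, each element of `group_rewards` would score an entire multi-turn trajectory rather than a single action, which is what lets the advantage signal favor exploration and recovery from earlier mistakes.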