🤖 AI Summary
Mobile GUI agents face two key challenges in reinforcement learning: a heavy-tailed distribution of task difficulty and low environment sampling efficiency. To address these, this paper proposes MOBILERL, an online agentic RL framework for vision-language-model (VLM) based GUI agents, built around the Difficulty-Adaptive GRPO (ADAGRPO) algorithm. Methodologically, it introduces three components: difficulty-adaptive positive replay, failure curriculum filtering, and shortest-path reward adjustment, enabling difficulty-aware curriculum learning and stable policy optimization. Evaluated on the AndroidWorld and AndroidLab benchmarks, the resulting MOBILERL-9B model achieves state-of-the-art success rates of 75.8% and 46.8%, respectively, significantly outperforming prior approaches. The framework has been integrated into the AutoGLM production system and open-sourced, advancing the practical deployment of general-purpose mobile GUI agents.
📝 Abstract
Building general-purpose graphical user interface (GUI) agents has become increasingly promising with the progress in vision-language models. However, developing effective mobile GUI agents with reinforcement learning (RL) remains challenging due to the heavy-tailed distribution of task difficulty and the inefficiency of large-scale environment sampling. We present MOBILERL, an online agentic reinforcement learning framework for enhancing GUI agents in mobile environments. Its core component is the Difficulty-Adaptive GRPO (ADAGRPO) algorithm. In ADAGRPO, we design difficulty-adaptive positive replay and failure curriculum filtering to adapt the model to tasks of varying difficulty. We further introduce a shortest-path reward adjustment strategy that reshapes rewards with respect to task length in multi-turn agentic tasks. These strategies jointly stabilize RL training, improve sample efficiency, and yield strong performance across diverse mobile apps and tasks. We apply MOBILERL to two open models (Qwen2.5-VL-7B-Instruct and GLM-4.1V-9B-Base). The resulting MOBILERL-9B model achieves state-of-the-art success rates on both AndroidWorld (75.8%) and AndroidLab (46.8%). The MOBILERL framework is adopted in the AutoGLM products and open-sourced at https://github.com/THUDM/MobileRL.
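To make the three ADAGRPO strategies concrete, here is a minimal sketch of two of them. The abstract names the strategies but gives no formulas, so the function names, signatures, and exact rules below are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch only: the shaping rule and filtering rule below are
# assumed forms consistent with the abstract's description, not the paper's
# exact definitions.

def shortest_path_reward(success: bool, steps: int, shortest_steps: int) -> float:
    """Assumed shortest-path reward adjustment: a successful trajectory's
    reward is scaled down the further its length exceeds the shortest known
    successful trajectory for the same task; failures earn zero."""
    if not success:
        return 0.0
    return shortest_steps / max(steps, shortest_steps)


def filter_failure_groups(task_groups: dict[str, list[float]]) -> dict[str, list[float]]:
    """Assumed failure curriculum filtering: defer tasks whose sampled
    rollout group contains no successful trajectory, since an all-failure
    group provides no useful advantage signal for GRPO-style updates."""
    return {
        task: rewards
        for task, rewards in task_groups.items()
        if any(r > 0.0 for r in rewards)
    }
```

For example, under this assumed shaping, a successful 10-step trajectory on a task whose shortest known solution takes 5 steps would receive `shortest_path_reward(True, 10, 5)`, i.e. half the reward of a 5-step success.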