🤖 AI Summary
Existing vision-based GUI agents rely on offline trajectory training, resulting in poor generalization, overfitting to fixed UI templates, and limited adaptability to novel environments. This paper introduces the first online reinforcement learning framework for mobile GUI agents, eliminating dependence on pre-collected datasets. Our approach comprises three key components: (1) a self-exploration-driven task curriculum generation mechanism enabling scalable continual learning; (2) an enhanced GRPO algorithm incorporating trajectory-aware advantage estimation and a composite reward function that jointly optimizes task success rate and execution efficiency; and (3) end-to-end visual GUI understanding integrated with real-time environmental interaction. Evaluated on three online mobile agent benchmarks, our method achieves significant improvements in task completion rate and cross-application generalization, demonstrating superior robustness and practicality.
📝 Abstract
Recently, there has been a surge of vision-based GUI agents designed to automate everyday mobile and web tasks. These agents interpret raw GUI screenshots and autonomously decide where to click, scroll, or type, bypassing handcrafted rules and app-specific APIs. However, most existing methods train GUI agents offline using pre-collected trajectories. This approach limits scalability, causes overfitting to specific UI templates, and leads to brittle policies when faced with unseen environments. We present MobileGUI-RL, a scalable framework that trains GUI agents in online environments. MobileGUI-RL contains two key components: it (i) synthesizes a curriculum of learnable tasks through self-exploration and filtering, and (ii) adapts GRPO to GUI navigation with trajectory-aware advantages and composite rewards that balance task success and execution efficiency. Experiments on three online mobile-agent benchmarks show consistent gains, validating the effectiveness of our approach.
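The trajectory-aware advantage and composite reward described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the reward weights (`w_succ`, `w_eff`), the linear efficiency term, and the function names are all assumptions; the paper's exact formulation is not given in the abstract. The key idea shown is that each rollout in a GRPO group receives one reward for the whole trajectory, which is normalized against the group to give a trajectory-level advantage.

```python
import statistics

def composite_reward(success, num_steps, max_steps, w_succ=1.0, w_eff=0.5):
    """Composite reward balancing task success and execution efficiency.
    The weights and the linear efficiency bonus are illustrative assumptions."""
    efficiency = 1.0 - num_steps / max_steps  # fewer steps -> higher efficiency
    return w_succ * float(success) + w_eff * max(efficiency, 0.0)

def grpo_advantages(rewards):
    """GRPO-style group-relative advantages: normalize each trajectory's
    reward against the group mean and std. The single per-trajectory value
    would then be broadcast to every action step of that rollout."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against identical rewards
    return [(r - mean) / std for r in rewards]

# Example: a group of 4 rollouts of the same GUI task (hypothetical outcomes)
group = [dict(success=True, steps=6), dict(success=True, steps=10),
         dict(success=False, steps=12), dict(success=True, steps=8)]
rewards = [composite_reward(t["success"], t["steps"], max_steps=12) for t in group]
advs = grpo_advantages(rewards)
```

Under this sketch, a successful trajectory that finishes in fewer steps earns a higher reward than a slower success, so the advantage signal favors both completing the task and doing so efficiently, while failed rollouts in the group receive negative advantages.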