🤖 AI Summary
Existing GUI reinforcement learning approaches face two key challenges: (1) they neglect task difficulty heterogeneity, leading to poor training adaptability, and (2) they rely on coarse-grained reward signals, resulting in inefficient policy updates. To address these, we propose a curriculum-based fine-grained optimization framework comprising three core components: (1) a trajectory difficulty grouping mechanism for adaptive task difficulty ranking; (2) a multi-signal reward function integrating rule-based priors and model-driven judgments to enhance feedback precision; and (3) a training procedure built on Group Relative Policy Optimization (GRPO) that enables dynamic curriculum adjustment and stable policy optimization. Evaluated on the public Android Control benchmark, our method achieves a 5.6% absolute improvement over prior state-of-the-art methods; on an internal online benchmark, it yields a 10.3% gain in success rate. Overall, the framework significantly boosts success rates across diverse GUI navigation tasks.
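The multi-signal reward described above, blending rule-based priors with a model-judged score, might be combined as in the sketch below. This is illustrative only: the specific signals (format validity, action-type match), the weights, and the `judge_score` interface are assumptions, not details taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class RewardWeights:
    fmt: float = 0.2     # rule-based prior: output parses into the expected schema
    action: float = 0.4  # rule-based prior: predicted action type matches reference
    judge: float = 0.4   # model-driven: judge model scores semantic correctness

def combined_reward(pred: dict, ref: dict, judge_score: float,
                    w: RewardWeights = RewardWeights()) -> float:
    """Blend rule-based reward signals with a model-judged score in [0, 1]."""
    fmt_ok = float(all(k in pred for k in ("action", "args")))
    action_ok = float(pred.get("action") == ref.get("action"))
    return w.fmt * fmt_ok + w.action * action_ok + w.judge * judge_score
```

Weighting cheap rule-based checks alongside a judge score is one common way to get denser feedback than a single binary task-success reward.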
📝 Abstract
As autonomous agents become adept at understanding and interacting with graphical user interface (GUI) environments, a new era of automated task execution is emerging. Recent studies have demonstrated that Reinforcement Learning (RL) can effectively enhance agents' performance in dynamic, interactive GUI environments. However, these methods face two key limitations: (1) they overlook the significant variation in difficulty across GUI tasks by treating the entire training set as uniform, which hampers the agent's ability to adapt its learning process; and (2) most approaches collapse task-specific nuances into a single, coarse reward, leaving the agent with a uniform signal that yields inefficient policy updates. To address these limitations, we propose CRAFT-GUI, a curriculum learning framework based on Group Relative Policy Optimization (GRPO) that explicitly accounts for the varying difficulty across trajectories. To enable more fine-grained policy optimization, we design a reward function that combines simple rule-based signals with model-judged evaluation, providing richer and more nuanced feedback during training. Experimental results demonstrate that our method achieves significant improvements over previous state-of-the-art approaches, outperforming them by 5.6% on the public benchmark Android Control and by 10.3% on our internal online benchmark, respectively. These findings empirically validate the effectiveness of integrating reinforcement learning with curriculum learning in GUI interaction tasks.
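For readers unfamiliar with GRPO, its core idea is to sample a group of trajectories per task and normalize each trajectory's reward against the group's statistics, avoiding a learned value critic. A minimal sketch of this group-relative advantage computation follows; it shows standard GRPO, not any CRAFT-GUI-specific modification.

```python
import statistics

def grpo_advantages(group_rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Group-relative advantages: each trajectory's reward is standardized
    by the mean and std of all rewards sampled for the same task, so a
    trajectory is reinforced only to the extent it beats its peers."""
    mean = statistics.fmean(group_rewards)
    std = statistics.pstdev(group_rewards)
    return [(r - mean) / (std + eps) for r in group_rewards]
```

Because advantages are centered within each group, easy tasks (where every rollout succeeds) and hard tasks (where every rollout fails) both yield near-zero advantages, which is one reason curriculum-style difficulty grouping pairs naturally with GRPO.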