🤖 AI Summary
This work addresses the performance degradation of autonomous web agents on dynamic real-world websites caused by distributional shift. The authors propose OpAgent, a modular agent framework based on online reinforcement learning that continuously optimizes its policy through interaction with open web environments. OpAgent coordinates a Planner, Grounder, Reflector, and Summarizer to unify planning, execution, reflection, and self-correction. Key innovations include function-primitive-driven hierarchical multi-task fine-tuning, a hybrid reward mechanism combining WebJudge with a rule-based decision tree, and a modular architecture supporting error recovery. On the WebArena benchmark, the RL-enhanced monolithic model reaches a 38.1% success rate (pass@5), surpassing all prior monolithic baselines, and the full OpAgent framework lifts this to a new state-of-the-art of 71.6%.
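As a rough illustration of the collaborative mechanism described above, the sketch below shows one plausible way a Planner, Grounder, Reflector, and Summarizer could be orchestrated within a single episode. All class names, method signatures, and the re-planning logic are hypothetical assumptions for illustration; the paper does not specify this interface.

```python
# Hypothetical sketch of the Planner/Grounder/Reflector/Summarizer loop.
# The planner, grounder, reflector, summarizer, and env objects are assumed
# duck-typed modules (e.g., VLM-backed); none of these names come from the paper.
from dataclasses import dataclass


@dataclass
class Step:
    subgoal: str   # high-level intent proposed by the Planner
    action: str    # concrete GUI action emitted by the Grounder
    ok: bool       # Reflector's verdict on whether the step advanced the subgoal


def run_episode(task, env, planner, grounder, reflector, summarizer, max_steps=30):
    """One task episode: plan -> ground -> act -> reflect, re-planning on failure."""
    history: list[Step] = []
    for _ in range(max_steps):
        # The Summarizer compresses the trajectory so the Planner's context stays short.
        context = summarizer.summarize(history)
        subgoal = planner.next_subgoal(task, context)
        # The Grounder maps the subgoal to an executable action (click, type, scroll, ...)
        # against the current observation (screenshot / accessibility tree).
        action = grounder.ground(subgoal, env.observe())
        obs = env.step(action)
        # The Reflector checks the outcome; failed steps stay in the history so the
        # Planner can route around them (error recovery / self-correction).
        ok = reflector.verify(subgoal, action, obs)
        history.append(Step(subgoal, action, ok))
        if env.task_done():
            break
    return history
```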
📝 Abstract
To fulfill user instructions, autonomous web agents must contend with the inherent complexity and volatility of real-world websites. Conventional paradigms predominantly rely on Supervised Fine-Tuning (SFT) or Offline Reinforcement Learning (RL) over static datasets. However, these methods suffer from severe distributional shift, as offline trajectories fail to capture the stochastic state transitions and real-time feedback of unconstrained wild-web environments. In this paper, we propose a robust Online Reinforcement Learning WebAgent designed to optimize its policy through direct, iterative interaction with unconstrained websites in the wild. Our approach comprises three core innovations: 1) Hierarchical Multi-Task Fine-tuning: We curate a comprehensive mixture of datasets categorized by functional primitives -- Planning, Acting, and Grounding -- yielding a Vision-Language Model (VLM) with strong instruction-following capabilities for Web GUI tasks. 2) Online Agentic RL in the Wild: We develop an online interaction environment and fine-tune the VLM with a specialized RL pipeline. We introduce a Hybrid Reward Mechanism that combines a ground-truth-agnostic WebJudge for holistic outcome assessment with a Rule-based Decision Tree (RDT) for dense progress rewards, effectively mitigating the credit-assignment challenge in long-horizon navigation. Notably, our RL-enhanced model achieves a 38.1\% success rate (pass@5) on WebArena, outperforming all existing monolithic baselines. 3) Operator Agent: We introduce a modular agentic framework, \textbf{OpAgent}, that orchestrates a Planner, Grounder, Reflector, and Summarizer. This synergy enables robust error recovery and self-correction, elevating the agent to a new State-of-the-Art (SOTA) success rate of \textbf{71.6\%}.
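To make the Hybrid Reward Mechanism concrete, here is a minimal sketch of how a terminal, ground-truth-agnostic WebJudge score could be blended with dense per-step shaping from a rule-based decision tree. The specific rules, attribute names, and blend weights are illustrative assumptions, not the paper's actual reward specification.

```python
# Hypothetical sketch of the Hybrid Reward Mechanism: terminal WebJudge outcome
# score + dense rule-based decision tree (RDT) progress shaping. All rules,
# attributes, and weights below are assumptions for illustration.
def rdt_progress_reward(step) -> float:
    """Rule-based decision tree: cheap, deterministic per-step shaping."""
    if step.page_changed and step.matches_subgoal:
        return 0.1    # meaningful forward progress
    if step.repeated_action:
        return -0.05  # penalize loops and stalling
    return 0.0        # neutral step


def hybrid_reward(trajectory, web_judge, alpha=1.0, beta=0.1) -> float:
    """Blend a holistic outcome score with summed per-step shaping.

    The dense RDT term eases credit assignment over long horizons; the terminal
    WebJudge term anchors the policy to overall task success without requiring
    ground-truth trajectories.
    """
    outcome = web_judge.score(trajectory)  # assumed to return a score in [0, 1]
    progress = sum(rdt_progress_reward(s) for s in trajectory.steps)
    return alpha * outcome + beta * progress
```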