OpAgent: Operator Agent for Web Navigation

📅 2026-02-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation of existing autonomous web agents on dynamic real-world websites due to distributional shift. The authors propose OpAgent, a modular agent framework built on online reinforcement learning that continuously optimizes its policy through direct interaction with open web environments. OpAgent orchestrates a Planner, Grounder, Reflector, and Summarizer to enable integrated planning, execution, reflection, and self-correction. Key innovations include function-primitive-driven hierarchical multi-task fine-tuning, a hybrid reward mechanism combining WebJudge with a rule-based decision tree, and a modular architecture supporting error recovery. On the WebArena benchmark, the RL-enhanced monolithic model alone reaches a 38.1% success rate, outperforming all existing monolithic baselines, and the full OpAgent framework raises this to a new state-of-the-art of 71.6%.
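The Planner–Grounder–Reflector–Summarizer collaboration described above can be pictured as a plan–act–reflect loop. The sketch below is a minimal, hypothetical rendering of that control flow; all class and method names are illustrative assumptions, not the paper's published interface, and the real modules would be backed by a VLM rather than string stubs.

```python
# Hypothetical sketch of OpAgent-style orchestration: Planner proposes a
# subgoal, Grounder maps it to a GUI action, Reflector checks the outcome,
# Summarizer compresses history for the next planning step. Illustrative only.
from dataclasses import dataclass, field


@dataclass
class Step:
    subgoal: str
    action: str
    ok: bool


@dataclass
class OpAgentLoop:
    history: list = field(default_factory=list)

    def plan(self, instruction: str, summary: str) -> str:
        # Planner: decompose the instruction into the next subgoal,
        # conditioned on a summary of what has happened so far.
        return f"next subgoal for '{instruction}' given [{summary}]"

    def ground(self, subgoal: str) -> str:
        # Grounder: map the subgoal to a concrete GUI action
        # (e.g. a click on screen coordinates, or typed text).
        return f"gui-action({subgoal})"

    def reflect(self, step: Step) -> bool:
        # Reflector: judge whether the step succeeded; a failure
        # triggers re-planning instead of blind continuation.
        return step.ok

    def summarize(self) -> str:
        # Summarizer: compress trajectory history into short context.
        return f"{len(self.history)} steps taken"

    def run(self, instruction: str, execute, max_steps: int = 10) -> list:
        for _ in range(max_steps):
            subgoal = self.plan(instruction, self.summarize())
            action = self.ground(subgoal)
            step = Step(subgoal, action, execute(action))
            self.history.append(step)
            if self.reflect(step):
                break  # Reflector deems the subgoal achieved
            # otherwise loop again with an updated summary (self-correction)
        return self.history
```

With a stub executor that always succeeds, the loop terminates after one step; a failing executor would exercise the re-planning path instead.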

📝 Abstract
To fulfill user instructions, autonomous web agents must contend with the inherent complexity and volatile nature of real-world websites. Conventional paradigms predominantly rely on Supervised Fine-Tuning (SFT) or Offline Reinforcement Learning (RL) using static datasets. However, these methods suffer from severe distributional shifts, as offline trajectories fail to capture the stochastic state transitions and real-time feedback of unconstrained wide web environments. In this paper, we propose a robust Online Reinforcement Learning WebAgent, designed to optimize its policy through direct, iterative interactions with unconstrained wide websites. Our approach comprises three core innovations: 1) Hierarchical Multi-Task Fine-tuning: We curate a comprehensive mixture of datasets categorized by functional primitives -- Planning, Acting, and Grounding -- establishing a Vision-Language Model (VLM) with strong instruction-following capabilities for Web GUI tasks. 2) Online Agentic RL in the Wild: We develop an online interaction environment and fine-tune the VLM using a specialized RL pipeline. We introduce a Hybrid Reward Mechanism that combines a ground-truth-agnostic WebJudge for holistic outcome assessment with a Rule-based Decision Tree (RDT) for progress reward. This system effectively mitigates the credit assignment challenge in long-horizon navigation. Notably, our RL-enhanced model achieves a 38.1% success rate (pass@5) on WebArena, outperforming all existing monolithic baselines. 3) Operator Agent: We introduce a modular agentic framework, namely OpAgent, orchestrating a Planner, Grounder, Reflector, and Summarizer. This synergy enables robust error recovery and self-correction, elevating the agent's performance to a new State-of-the-Art (SOTA) success rate of 71.6%.
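The Hybrid Reward Mechanism in innovation 2) mixes a terminal outcome score (WebJudge) with dense progress credit (the RDT). A minimal sketch of that mixing, under assumptions: the judge and rules below are toy stand-ins (the paper's WebJudge is a learned, ground-truth-agnostic evaluator), and the weighting coefficient `alpha` is an invented parameter for illustration.

```python
# Hedged sketch of a hybrid reward: terminal outcome judgment plus
# rule-based progress credit, addressing long-horizon credit assignment.
# All rules, state keys, and the alpha weight are illustrative assumptions.

def web_judge(final_state: dict) -> float:
    # Outcome reward: did the episode end in a success-looking state?
    # (Stand-in for an LLM judge scoring the whole trajectory.)
    return 1.0 if final_state.get("task_done") else 0.0


def rdt_progress(state: dict) -> float:
    # Rule-based decision tree: partial credit for verifiable milestones,
    # so intermediate steps still receive signal in long episodes.
    if state.get("form_submitted"):
        return 0.6
    if state.get("target_page_reached"):
        return 0.3
    return 0.0


def hybrid_reward(trajectory: list[dict], alpha: float = 0.7) -> float:
    # Weighted mix of the terminal outcome and the best milestone reached.
    outcome = web_judge(trajectory[-1])
    progress = max(rdt_progress(state) for state in trajectory)
    return alpha * outcome + (1 - alpha) * progress
```

A trajectory that reaches the target page, submits the form, and finishes the task would score `0.7 * 1.0 + 0.3 * 0.6`, while an empty-handed episode scores zero; the progress term keeps the gradient from vanishing on near-miss trajectories.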
Problem

Research questions and friction points this paper is trying to address.

web navigation
distributional shift
online reinforcement learning
autonomous web agents
real-world websites
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online Reinforcement Learning
Vision-Language Model
Operator Agent
Hybrid Reward Mechanism
Web Navigation
Yuyu Guo
Ant Group
Wenjie Yang
Ant Group
Siyuan Yang
Wallenberg-NTU Presidential Postdoctoral Fellowship, Nanyang Technological University
Computer Vision, Action Recognition
Ziyang Liu
Research Fellow, Harvard Medical School; PhD, Tsinghua University
AI4Bio, Graph Embedding, Large Language Model
Cheng Chen
Ant Group
Yuan Wei
Ant Group
Yun Hu
Ant Group
Yang Huang
Ant Group
Guoliang Hao
Ant Group
Dongsheng Yuan
Ant Group
Jianming Wang
Ant Group
Xin Chen
Ant Group
Hang Yu
Ant Group
Lei Lei
Ant Group
Peng Di
Senior Staff Engineer at Ant Group; Adjunct Associate Professor at UNSW Sydney
Parallel Computing, Programming Language, Compiler, Software Engineering