WebAnchor: Anchoring Agent Planning to Stabilize Long-Horizon Web Reasoning

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 2
Influential: 1
🤖 AI Summary
This work addresses a limitation of existing reinforcement learning approaches to long-horizon web reasoning: uniform reward allocation overlooks the outsized role of initial-step planning. To remedy this, the authors propose Anchor-GRPO, a two-stage framework: the first stage optimizes the initial plan (the "plan anchor") using fine-grained rubrics derived from self-play experience and human calibration, while the second stage aligns execution with that plan using sparse rewards. The method is the first to explicitly identify and exploit the plan-anchor phenomenon, decoupling planning from execution and combining fine-grained scoring with a sparse reward mechanism. Evaluated on four benchmarks, including BrowseComp and GAIA, the approach significantly outperforms existing baselines: WebAnchor-30B achieves 46.0% pass@1 on BrowseComp and 76.4% on GAIA, with performance gains that grow consistently with model scale and context length.

📝 Abstract
Large Language Model (LLM)-based agents have shown strong capabilities in web information seeking, with reinforcement learning (RL) becoming a key optimization paradigm. However, planning remains a bottleneck, as existing methods struggle with long-horizon strategies. Our analysis reveals a critical phenomenon, the plan anchor, whereby the first reasoning step disproportionately shapes downstream behavior in long-horizon web reasoning tasks. Current RL algorithms fail to account for this, distributing rewards uniformly across the trajectory. To address this, we propose Anchor-GRPO, a two-stage RL framework that decouples planning and execution. In Stage 1, the agent optimizes its first-step planning using fine-grained rubrics derived from self-play experiences and human calibration. In Stage 2, execution is aligned with the initial plan through sparse rewards, ensuring stable and efficient tool usage. We evaluate Anchor-GRPO on four benchmarks: BrowseComp, BrowseComp-Zh, GAIA, and XBench-DeepSearch. Across models from 3B to 30B, Anchor-GRPO outperforms baseline GRPO and First-step GRPO, improving both task success and tool efficiency. Notably, WebAnchor-30B achieves 46.0% pass@1 on BrowseComp and 76.4% on GAIA. Anchor-GRPO also scales well, achieving higher accuracy as model size and context length increase.
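The two-stage reward structure described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the rubric format, function names (`stage1_plan_reward`, `stage2_execution_reward`, `grpo_advantages`), and the alignment bonus are hypothetical stand-ins for the paper's fine-grained rubrics and sparse rewards; only the group-relative advantage normalization follows standard GRPO.

```python
def stage1_plan_reward(plan: str, rubric: dict[str, float]) -> float:
    """Stage 1: score the first-step plan against fine-grained rubric criteria.

    Each rubric key is a required phrase (a hypothetical stand-in for criteria
    derived from self-play experience and human calibration); its value is the
    credit awarded when the plan satisfies that criterion.
    """
    return sum(weight for phrase, weight in rubric.items() if phrase in plan)


def stage2_execution_reward(task_success: bool, followed_plan: bool,
                            alignment_bonus: float = 0.2) -> float:
    """Stage 2: sparse reward — terminal task outcome, plus a small bonus
    (hypothetical weighting) when execution stayed aligned with the plan."""
    reward = 1.0 if task_success else 0.0
    if followed_plan:
        reward += alignment_bonus
    return reward


def grpo_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantages as in GRPO: normalize each rollout's reward
    by the mean and standard deviation of its sampled group."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    std = std or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]


# Usage: score a group of sampled first-step plans, then normalize.
plans = ["search official sources, then cross-check dates", "answer directly"]
rubric = {"search": 0.5, "cross-check": 0.5}  # toy rubric for illustration
stage1_scores = [stage1_plan_reward(p, rubric) for p in plans]
advantages = grpo_advantages(stage1_scores)
```

The point of the decoupling is visible in the two functions: Stage 1 gives dense, criterion-level credit to the first step only, while Stage 2 collapses the rest of the trajectory to a sparse terminal signal, so gradient pressure concentrates on the plan anchor.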
Problem

Research questions and friction points this paper is trying to address.

plan anchor
long-horizon web reasoning
reinforcement learning
agent planning
reward distribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

plan anchor
Anchor-GRPO
long-horizon web reasoning
two-stage RL
LLM-based agents