Explore to Evolve: Scaling Evolved Aggregation Logic via Proactive Online Exploration for Deep Research Agents

📅 2025-10-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing open-source deep research agents emphasize information retrieval while neglecting the critical capability of multi-source information aggregation. This work introduces the “Exploration-as-Evolution” paradigm, wherein research agents actively explore the live web to autonomously evolve verifiable knowledge-aggregation programs—marking the first systematic enhancement of information integration and logical reasoning in research-oriented agents. Methodologically, we design a self-evolving framework grounded in higher-order logical type composition, supporting 12 high-level logical operations, and integrate it with the SmolAgents framework for supervised fine-tuning trajectory collection and model training. Contributions include: (i) releasing WebAggregatorQA—a large-scale, multi-domain, verifiable benchmark (10K samples, covering 50K websites across 11 domains)—and its human-annotated evaluation set; (ii) training WebAggregator-8B, matching GPT-4.1 performance, and WebAggregator-32B, surpassing GPT-4.1 by >10% on GAIA-text and approaching Claude-3.7-Sonnet.

📝 Abstract
Deep research web agents not only retrieve information from diverse sources such as web environments, files, and multimodal inputs, but, more importantly, must rigorously analyze and aggregate knowledge for insightful research. However, existing open-source deep research agents predominantly focus on enhancing the information-seeking capabilities of web agents to locate specific information, while overlooking the essential need for information aggregation, which limits their ability to support in-depth research. We propose an Explore to Evolve paradigm to scalably construct verifiable training data for web agents. Beginning with proactive online exploration, an agent sources grounded information by exploring the real web. Using the collected evidence, the agent then self-evolves an aggregation program by selecting, composing, and refining operations from 12 high-level logical types to synthesize a verifiable QA pair. This evolution from high-level guidance to concrete operations allows us to scalably produce WebAggregatorQA, a dataset of 10K samples across 50K websites and 11 domains. Based on an open-source agent framework, SmolAgents, we collect supervised fine-tuning trajectories to develop a series of foundation models, WebAggregator. WebAggregator-8B matches the performance of GPT-4.1, while the 32B variant surpasses GPT-4.1 by more than 10% on GAIA-text and closely approaches Claude-3.7-Sonnet. Moreover, given the limited availability of benchmarks that evaluate web agents' information aggregation abilities, we construct a human-annotated evaluation split of WebAggregatorQA as a challenging test set. On this benchmark, Claude-3.7-Sonnet achieves only 28%, and GPT-4.1 scores 25.8%. Even when agents manage to retrieve all references, they still struggle on WebAggregatorQA, highlighting the need to strengthen the information aggregation capabilities of web agent foundations.
Problem

Research questions and friction points this paper is trying to address.

Existing web agents lack information aggregation capabilities for deep research
Need scalable methods to create verifiable training data for web agents
Limited benchmarks exist to evaluate web agents' information aggregation abilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proactive online exploration collects grounded web evidence
Self-evolving aggregation programs select, compose, and refine operations drawn from 12 high-level logical types
Scalable dataset generation enables verifiable QA synthesis
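The explore-then-evolve pipeline above can be sketched as a minimal loop: explore the web for grounded evidence, then compose an aggregation program over that evidence to emit a verifiable QA pair. This is an illustrative sketch only; the paper does not enumerate its 12 logical types, so the operation names, the `fetch` stub, and the toy program semantics here are all assumptions.

```python
import random

# Hypothetical operation names: the summary says there are 12 high-level
# logical types but does not list them, so this taxonomy is illustrative.
LOGICAL_TYPES = [
    "count", "compare", "rank", "filter", "intersect", "union",
    "aggregate_sum", "temporal_order", "max_min", "ratio",
    "conditional", "multi_hop",
]

def explore(seed_query, fetch):
    """Proactive exploration: collect grounded evidence from the live web.
    `fetch` stands in for a real browsing/search tool."""
    return [fetch(seed_query)]

def evolve_program(evidence, rng):
    """Self-evolution: select and compose operations from the high-level
    logical types into a concrete aggregation program."""
    ops = rng.sample(LOGICAL_TYPES, k=2)  # compose two operations

    def program(items):
        # Toy semantics: a real program would execute verifiable logic
        # over the evidence rather than just describe it.
        return {"ops": ops, "n_evidence": len(items)}

    return ops, program

def synthesize_qa(seed_query, fetch, seed=0):
    """End-to-end: explore, evolve a program, and emit a QA pair whose
    answer is checkable by re-running the program on the evidence."""
    rng = random.Random(seed)
    evidence = explore(seed_query, fetch)
    ops, program = evolve_program(evidence, rng)
    answer = program(evidence)
    question = f"Apply {' then '.join(ops)} over sources about {seed_query!r}"
    return {"question": question, "answer": answer, "evidence": evidence}

# Usage with a stubbed fetcher (no real web access):
qa = synthesize_qa("renewable energy capacity", fetch=lambda q: f"page about {q}")
```

Because the answer is produced by executing the evolved program over the collected evidence, each synthesized QA pair is verifiable by construction, which is the property the dataset pipeline relies on.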