Beyond Ten Turns: Unlocking Long-Horizon Agentic Search with Large-Scale Asynchronous RL

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing open-source LLM agents struggle with complex, knowledge-intensive search tasks, particularly in resolving ambiguous queries, generating precise retrieval expressions, deeply analyzing results, and sustaining exploration, due to tight interaction limits (≤10 steps), low training efficiency, and poor synthetic data quality. Method: The authors propose the first fully asynchronous reinforcement learning framework supporting ultra-long-horizon reasoning, enabling end-to-end autonomous search with more than 40 tool invocations and outputs exceeding 150K tokens. It requires no external large language model; instead, it combines prompt engineering with QwQ-32B to build a scalable, self-contained training loop that autonomously generates high-quality question-answer pairs. Contribution/Results: The method achieves Avg@4 scores of 42.1 on xBench and 52.8 on GAIA, substantially surpassing prior open-source 32B models and breaking the bottleneck in long-horizon search policy learning.
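To make the "fully asynchronous" idea concrete, here is a minimal toy sketch (not the paper's implementation; all names and values are illustrative) of the ready-first pattern: rollout workers push trajectories of highly variable length into a queue as soon as each one finishes, and the trainer consumes whatever is ready instead of waiting for a synchronized batch, so one 40-turn trajectory never stalls the short ones.

```python
import queue
import random
import threading

def rollout_worker(task_queue, traj_queue, max_turns=40):
    """Simulate an agent rolling out long-horizon search trajectories.
    Each trajectory takes a variable number of tool-call turns, so the
    worker pushes a finished trajectory the moment it completes."""
    while True:
        task = task_queue.get()
        if task is None:  # poison pill: shut down this worker
            task_queue.task_done()
            break
        turns = random.randint(1, max_turns)  # stand-in for real tool calls
        traj_queue.put({"task": task, "turns": turns, "reward": random.random()})
        task_queue.task_done()

def train_async(num_tasks=32, num_workers=4, batch_size=8):
    """Asynchronous loop: the trainer updates on whichever trajectories
    arrive first, decoupling generation from policy updates."""
    task_queue, traj_queue = queue.Queue(), queue.Queue()
    for t in range(num_tasks):
        task_queue.put(t)
    workers = [threading.Thread(target=rollout_worker,
                                args=(task_queue, traj_queue))
               for _ in range(num_workers)]
    for w in workers:
        w.start()

    updates, collected = 0, 0
    while collected < num_tasks:
        # take a ready-first batch; get() blocks only until enough
        # trajectories (in any order) have finished
        batch = [traj_queue.get()
                 for _ in range(min(batch_size, num_tasks - collected))]
        collected += len(batch)
        # a real system would run a policy-gradient update on `batch` here
        updates += 1

    for _ in workers:
        task_queue.put(None)
    for w in workers:
        w.join()
    return updates, collected

updates, collected = train_async()
```

The queue-based decoupling is the essential point: generation throughput and update cadence are independent, which is what allows trajectories exceeding 40 turns and 150K tokens without collapsing training efficiency.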

📝 Abstract
Recent advancements in LLM-based agents have demonstrated remarkable capabilities in handling complex, knowledge-intensive tasks by integrating external tools. Among diverse choices of tools, search tools play a pivotal role in accessing vast external knowledge. However, open-source agents still fall short of achieving expert-level Search Intelligence, the ability to resolve ambiguous queries, generate precise searches, analyze results, and conduct thorough exploration. Existing approaches fall short in scalability, efficiency, and data quality. For example, small turn limits in existing online RL methods (e.g., ≤10) restrict complex strategy learning. This paper introduces ASearcher, an open-source project for large-scale RL training of search agents. Our key contributions include: (1) Scalable fully asynchronous RL training that enables long-horizon search while maintaining high training efficiency. (2) A prompt-based LLM agent that autonomously synthesizes high-quality and challenging QAs, creating a large-scale QA dataset. Through RL training, our prompt-based QwQ-32B agent achieves substantial improvements, with 46.7% and 20.8% Avg@4 gains on xBench and GAIA, respectively. Notably, our agent exhibits extreme long-horizon search, with tool calls exceeding 40 turns and output tokens exceeding 150k during training time. With a simple agent design and no external LLMs, ASearcher-Web-QwQ achieves Avg@4 scores of 42.1 on xBench and 52.8 on GAIA, surpassing existing open-source 32B agents. We open-source our models, training data, and code at https://github.com/inclusionAI/ASearcher.
Problem

Research questions and friction points this paper is trying to address.

Overcoming scalability and efficiency limitations in existing search agent training methods
Enabling long-horizon search capabilities beyond small turn limits
Improving search intelligence for resolving ambiguous queries and thorough exploration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale asynchronous RL training for long-horizon search
Prompt-based LLM agent synthesizes high-quality QA dataset
Achieves extreme long-horizon search exceeding 40 tool turns
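The self-contained QA-synthesis contribution can be sketched as a propose-filter-retry loop. This is a hypothetical illustration, not the paper's pipeline: `llm_propose_qa` stands in for a real prompt-driven call (the paper uses QwQ-32B rather than an external LLM), and the difficulty threshold and retry cap are invented parameters.

```python
import random

def llm_propose_qa(seed_doc, rng):
    """Stub for a prompt-driven LLM call that drafts a QA pair from a
    seed document; here it fabricates a (question, answer, difficulty)
    record so the loop structure is runnable."""
    return {"q": f"question about {seed_doc}",
            "a": f"answer from {seed_doc}",
            "difficulty": rng.random()}

def synthesize_dataset(seed_docs, min_difficulty=0.5, max_retries=3, seed=0):
    """Self-contained synthesis loop: propose a QA pair per seed doc,
    keep only pairs passing a difficulty filter, and retry rejected
    seeds a bounded number of times so the loop always terminates."""
    rng = random.Random(seed)
    dataset, pending = [], list(seed_docs)
    retries = {doc: 0 for doc in seed_docs}
    while pending:
        doc = pending.pop()
        qa = llm_propose_qa(doc, rng)
        if qa["difficulty"] >= min_difficulty:
            dataset.append(qa)
        elif retries[doc] < max_retries:
            retries[doc] += 1
            pending.append(doc)  # try this seed again
        # else: drop the seed as unsalvageable
    return dataset

data = synthesize_dataset([f"doc{i}" for i in range(10)])
```

The key design choice the paper highlights is that this loop needs no external LLM: the same in-house model both generates and is trained on the data, making the pipeline scalable and self-contained.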