SAGE: Steerable Agentic Data Generation for Deep Search with Execution Feedback

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high cost and scarcity of human-annotated complex multi-hop question answering (QA) data, which hinders the training of deep-search agents. To overcome this limitation, the authors propose SAGE, a framework that automatically synthesizes high-quality QA pairs of controllable difficulty through iterative interactions between a data generator and a search agent, leveraging execution feedback. SAGE is the first approach to enable difficulty-controllable data generation grounded in execution feedback, and agents trained on its data transfer seamlessly from fixed-corpus retrieval to open-web search without additional training. By integrating agent collaboration, feedback loops, difficulty modulation, and data refinement mechanisms, SAGE significantly enhances both answer accuracy and reasoning diversity, achieving up to a 23% relative performance improvement on standard benchmarks.

📝 Abstract
Deep search agents, which aim to answer complex questions requiring reasoning across multiple documents, can significantly speed up the information-seeking process. Collecting human annotations for this application is prohibitively expensive due to long and complex exploration trajectories. We propose an agentic pipeline that automatically generates high-quality, difficulty-controlled deep search question-answer pairs for a given corpus and target difficulty level. Our pipeline, SAGE, consists of a data generator, which proposes QA pairs, and a search agent, which attempts to solve each generated question and provides execution feedback to the data generator. The two components interact over multiple rounds to iteratively refine the question-answer pairs until they satisfy the target difficulty level. Our intrinsic evaluation shows that SAGE generates questions requiring diverse reasoning strategies while significantly increasing the correctness and difficulty of the generated data. Our extrinsic evaluation demonstrates up to a 23% relative performance gain on popular deep search benchmarks when training deep search agents with our synthetic data. Additional experiments show that agents trained on our data can adapt from fixed-corpus retrieval to Google Search at inference time, without further training.
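
The generator–agent loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: all names (`QAPair`, `Feedback`, `refine_until_target`, the use of reasoning hops as the difficulty signal, and the toy stand-in functions) are hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass
class QAPair:
    question: str
    answer: str

@dataclass
class Feedback:
    solved: bool  # did the search agent reach the gold answer?
    hops: int     # reasoning hops observed in the agent's trajectory (assumed difficulty proxy)

def refine_until_target(propose, attempt, refine, target_hops, max_rounds=5):
    """Generic generator/agent loop: propose a QA pair, let the search agent
    attempt it, and refine it using execution feedback until the pair is both
    answerable and at least as hard as the target difficulty."""
    qa = propose()
    for _ in range(max_rounds):
        fb = attempt(qa)
        if fb.solved and fb.hops >= target_hops:
            return qa, fb          # accepted: correct and hard enough
        qa = refine(qa, fb)        # e.g., add a hop or fix the answer
    return qa, fb                  # best effort after max_rounds

# Toy stand-ins for illustration only: each refinement adds one reasoning hop.
state = {"hops": 1}
propose = lambda: QAPair("Who directed the film X?", "Y")
attempt = lambda qa: Feedback(solved=True, hops=state["hops"])
def refine(qa, fb):
    state["hops"] += 1
    return QAPair(qa.question + " (harder)", qa.answer)

qa, fb = refine_until_target(propose, attempt, refine, target_hops=3)
print(fb.hops)  # → 3
```

In the paper's setting, `attempt` would run a full search trajectory over the corpus (or the open web) and `refine` would be the LLM-based data generator conditioned on that trajectory; the loop structure is the same.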
Problem

Research questions and friction points this paper is trying to address.

deep search
data generation
question answering
difficulty control
synthetic data
Innovation

Methods, ideas, or system contributions that make the work stand out.

agentic data generation
deep search
execution feedback
difficulty-controlled QA
synthetic data