SimpleDeepSearcher: Deep Information Seeking via Web-Powered Reasoning Trajectory Synthesis

📅 2025-05-22
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing RAG systems face three key challenges in multi-step deep search: low-quality training data, distributional shift between simulated environments and real-world deployment, and high operational costs. This paper proposes a lightweight and efficient framework featuring the first data synthesis method grounded in authentic Web interactions, coupled with a multi-criteria trajectory filtering strategy to circumvent the high sample complexity and distribution mismatch inherent in reinforcement learning. Using only 871 carefully curated trajectories for supervised fine-tuning (SFT), our approach significantly outperforms RL-based baselines across five cross-domain deep search benchmarks. Our core contributions are: (1) establishing the first high-fidelity trajectory synthesis paradigm tailored to realistic web environments; (2) empirically validating the efficacy of SFT under extreme data scarcity; and (3) providing a practical, deployable pathway for deep search systems.

📝 Abstract
Retrieval-augmented generation (RAG) systems have advanced large language models (LLMs) in complex deep search scenarios requiring multi-step reasoning and iterative information retrieval. However, existing approaches face critical limitations: they lack high-quality training trajectories, or they suffer from distributional mismatches in simulated environments and prohibitive computational costs for real-world deployment. This paper introduces SimpleDeepSearcher, a lightweight yet effective framework that bridges this gap through strategic data engineering rather than complex training paradigms. Our approach synthesizes high-quality training data by simulating realistic user interactions in live web search environments, coupled with a multi-criteria curation strategy that optimizes diversity and quality on both the input and output sides. Experiments on five benchmarks across diverse domains demonstrate that SFT on only 871 curated samples yields significant improvements over RL-based baselines. Our work establishes SFT as a viable pathway by systematically addressing the data-scarcity bottleneck, offering practical insights for efficient deep search systems. Our code is available at https://github.com/RUCAIBox/SimpleDeepSearcher.
Problem

Research questions and friction points this paper is trying to address.

Addresses lack of high-quality training trajectories in RAG systems
Reduces distributional mismatches in simulated search environments
Minimizes computational costs for real-world deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulates realistic web search interactions to synthesize training data
Uses multi-criteria curation for diverse high-quality samples
Achieves strong performance with minimal supervised fine-tuning
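The multi-criteria curation step described above can be sketched as a sequence of filters over synthesized trajectories. This is a minimal illustrative sketch only: the `Trajectory` fields, the specific criteria (answer correctness, multi-step behavior, question dedup), and the thresholds are assumptions for illustration, not the paper's actual implementation; only the 871-sample budget comes from the source.

```python
# Hypothetical sketch of multi-criteria trajectory curation for SFT data.
# The criteria and thresholds below are illustrative assumptions, not the
# authors' actual pipeline; the 871 budget is from the paper's abstract.
from dataclasses import dataclass

@dataclass
class Trajectory:
    question: str
    search_queries: list   # web search queries issued during the rollout
    final_answer: str
    gold_answer: str

def is_correct(t: Trajectory) -> bool:
    # Output-side quality: keep only trajectories whose answer matches the gold.
    return t.final_answer.strip().lower() == t.gold_answer.strip().lower()

def is_multi_step(t: Trajectory, min_queries: int = 2) -> bool:
    # Difficulty proxy: require genuinely multi-step search behavior.
    return len(t.search_queries) >= min_queries

def is_diverse(t: Trajectory, kept: list) -> bool:
    # Input-side diversity: drop trajectories duplicating an already-kept question.
    return all(t.question != k.question for k in kept)

def curate(candidates: list, budget: int = 871) -> list:
    """Apply the filters in sequence, stopping at the target sample budget."""
    kept = []
    for t in candidates:
        if is_correct(t) and is_multi_step(t) and is_diverse(t, kept):
            kept.append(t)
        if len(kept) == budget:
            break
    return kept
```

Under this sketch, a large pool of web-synthesized trajectories is reduced to a small, high-quality SFT set; the ordering (cheap exact-match checks first, pairwise dedup last) keeps the pass inexpensive.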
Shuang Sun
School of Computer Science and Engineering, Northeastern University
Huatong Song
GSAI, Renmin University of China
Large Language Models
Yuhao Wang
Gaoling School of Artificial Intelligence, Renmin University of China
Ruiyang Ren
Renmin University of China
Information Retrieval, Natural Language Processing, Large Language Models
Jinhao Jiang
PhD student of CS, Renmin University of China
NLP, LLMs, Complex Reasoning, Agent
Junjie Zhang
Gaoling School of Artificial Intelligence, Renmin University of China
Fei Bai
School of Informatics, Xiamen University
Jia Deng
Gaoling School of Artificial Intelligence, Renmin University of China
Wayne Xin Zhao
Professor, Renmin University of China
Recommender System, Natural Language Processing, Large Language Model
Zheng Liu
Beijing Academy of Artificial Intelligence
Lei Fang
DataCanvas Alaya NeW
Zhongyuan Wang
Beijing Academy of Artificial Intelligence
Ji-Rong Wen
Gaoling School of Artificial Intelligence, Renmin University of China
Large Language Model, Web Search, Information Retrieval, Machine Learning