From Web Search towards Agentic Deep Research: Incentivizing Search with Reasoning Agents

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional keyword search struggles with multi-step, complex information needs. To address this, we propose "Agentic Deep Research," a paradigm that employs large language model based reasoning agents to perform end-to-end deep research via autonomous task planning, iterative retrieval, knowledge integration, and dynamic feedback. We introduce a test-time scaling law, a theoretical characterization of how computational depth in reasoning and search governs performance. Systems following this paradigm integrate open-source toolchains with a unified benchmarking framework and achieve significant improvements over state-of-the-art methods across multiple complex question-answering and research-oriented benchmarks (average +23.6%). This work advances information retrieval from passive pattern matching toward autonomous, agent-driven investigative reasoning, establishing both theoretical foundations and practical pathways for next-generation knowledge acquisition systems.

📝 Abstract
Information retrieval is a cornerstone of modern knowledge acquisition, enabling billions of queries each day across diverse domains. However, traditional keyword-based search engines are increasingly inadequate for handling complex, multi-step information needs. Our position is that Large Language Models (LLMs), endowed with reasoning and agentic capabilities, are ushering in a new paradigm termed Agentic Deep Research. These systems transcend conventional information search techniques by tightly integrating autonomous reasoning, iterative retrieval, and information synthesis into a dynamic feedback loop. We trace the evolution from static web search to interactive, agent-based systems that plan, explore, and learn. We also introduce a test-time scaling law to formalize the impact of computational depth on reasoning and search. Supported by benchmark results and the rise of open-source implementations, we demonstrate that Agentic Deep Research not only significantly outperforms existing approaches, but is also poised to become the dominant paradigm for future information seeking. All related resources, including industry products, research papers, benchmark datasets, and open-source implementations, are collected for the community at https://github.com/DavidZWZ/Awesome-Deep-Research.
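The dynamic feedback loop described in the abstract (plan, retrieve, synthesize, repeat) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `plan`, `retrieve`, and `synthesize` are hypothetical stand-ins for an LLM planner, a search backend, and an answer integrator, and the depth budget `max_depth` stands in for test-time compute.

```python
# Minimal sketch of an agentic deep-research loop (hypothetical API):
# plan -> retrieve -> synthesize, repeated under a depth budget that
# stands in for test-time compute.

def plan(question, notes):
    # Hypothetical planner: derive the next sub-query from what is still unknown.
    return f"{question} (step {len(notes) + 1})"

def retrieve(sub_query):
    # Stand-in for a real search/retrieval call.
    return f"evidence for: {sub_query}"

def synthesize(notes):
    # Integrate accumulated evidence into a draft answer.
    return " | ".join(notes)

def deep_research(question, max_depth=3):
    notes = []
    answer = ""
    for _ in range(max_depth):          # depth budget = test-time compute
        sub_query = plan(question, notes)
        notes.append(retrieve(sub_query))
        answer = synthesize(notes)      # feedback: re-synthesize each round
    return answer

print(deep_research("Why does keyword search struggle with multi-step queries?"))
```

In a real system, the loop would also decide dynamically whether the current answer is sufficient, rather than always spending the full budget.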
Problem

Research questions and friction points this paper is trying to address.

Transition from keyword search to reasoning-based agentic deep research
Address complex multi-step information needs with LLMs
Improve information retrieval via dynamic reasoning and synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs with reasoning and agentic capabilities
Autonomous reasoning and iterative retrieval integration
Test-time scaling law for computational depth impact
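The page does not reproduce the paper's exact formulation of the test-time scaling law; as an illustration only, such laws are commonly written as a power law in the compute or depth budget. All symbols below are assumptions, not the paper's notation.

```latex
% Hypothetical power-law form of a test-time scaling law (assumption:
% the paper's exact formulation is not reproduced on this page).
% P(d): task performance at reasoning/search depth d
% P_{\max}: asymptotic performance; c, \alpha > 0: fitted constants
P(d) = P_{\max} - c \, d^{-\alpha}
```

Under this form, performance improves monotonically with depth but with diminishing returns, approaching $P_{\max}$ as $d \to \infty$.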
👥 Authors
Weizhi Zhang · University of Illinois Chicago · Personalization, Large Language Models, Agents
Yangning Li · Tsinghua University
Yuanchen Bei · University of Illinois Urbana-Champaign
Junyu Luo · Peking University
Guancheng Wan · Computer Science, UCLA · AI Agent, AI4Science, Large Language Model, Trustworthy AI
Liangwei Yang · Salesforce Research · Network Science, Recommender System, Efficient Modeling
Chenxuan Xie · Zhejiang University of Technology · graph, data mining
Yuyao Yang · University of Illinois Chicago
Wei-Chieh Huang · University of Illinois Chicago · Natural language processing
Chunyu Miao · University of Illinois at Chicago · LLM, code generation
Henry Peng Zou · University of Illinois Chicago · Agents, Large Language Models, Multimodal Learning, Natural Language Processing
Xiao Luo · University of California, Los Angeles
Yusheng Zhao · Peking University
Yankai Chen · Postdoctoral Associate, Cornell University · Information Retrieval, Knowledge Mining, Large Language Models, Agentic AI
Chunkit Chan · Ph.D. Student, HKUST | Applied Scientist Intern, Amazon, Palo Alto · Natural Language Processing, Large Language Models, Theory of Mind, Computational Linguistics
Peilin Zhou · HKUST; Peking University · sequential recommendation, natural language processing
Xinyang Zhang · Amazon
Chenwei Zhang · Amazon
Jingbo Shang · Associate Professor, UC San Diego · Natural Language Processing, Data Mining, Deep Learning, Information Extraction, Weak Supervision
Ming Zhang · Peking University
Yangqiu Song · HKUST · Artificial Intelligence, Data Mining, Natural Language Processing, Knowledge Graphs, Commonsense Reasoning
Irwin King · The Chinese University of Hong Kong · social computing, machine learning, AI, graph neural networks, NLP
Philip S. Yu · Professor of Computer Science, University of Illinois at Chicago · Data mining, Database, Privacy