Towards Agentic RAG with Deep Reasoning: A Survey of RAG-Reasoning Systems in LLMs

📅 2025-07-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitations of retrieval-augmented generation (RAG) in multi-step reasoning, and the factual unreliability and hallucination tendencies of purely reasoning-oriented models, this survey organizes the field under a unified RAG-Reasoning perspective with three synergistic strands: (1) reasoning-enhanced RAG, where advanced reasoning sharpens and targets retrieval; (2) retrieval-enhanced reasoning, where retrieved knowledge supplies missing premises and grounded factual support; and (3) synergized, agent-based frameworks in which LLMs iteratively interleave search and reasoning with reflective refinement. The surveyed methods span multi-step reasoning, dynamic knowledge injection, agent-driven decision-making, and multimodal adaptation, with the strongest systems reaching state-of-the-art results on knowledge-intensive benchmarks such as HotpotQA, FEVER, and MuSiQue. The authors publicly release Awesome-RAG-Reasoning, a curated open-source repository systematizing the technical landscape, and outline research avenues toward deeper RAG-Reasoning systems that improve large language models' factual accuracy, logical coherence, and trustworthiness on complex reasoning tasks while remaining human-centered.

📝 Abstract
Retrieval-Augmented Generation (RAG) lifts the factuality of Large Language Models (LLMs) by injecting external knowledge, yet it falls short on problems that demand multi-step inference; conversely, purely reasoning-oriented approaches often hallucinate or mis-ground facts. This survey synthesizes both strands under a unified reasoning-retrieval perspective. We first map how advanced reasoning optimizes each stage of RAG (Reasoning-Enhanced RAG). Then, we show how retrieved knowledge of different types supplies missing premises and expands context for complex inference (RAG-Enhanced Reasoning). Finally, we spotlight emerging Synergized RAG-Reasoning frameworks, where (agentic) LLMs iteratively interleave search and reasoning to achieve state-of-the-art performance across knowledge-intensive benchmarks. We categorize methods, datasets, and open challenges, and outline research avenues toward deeper RAG-Reasoning systems that are more effective, multimodally-adaptive, trustworthy, and human-centric. The collection is available at https://github.com/DavidZWZ/Awesome-RAG-Reasoning.
Problem

Research questions and friction points this paper is trying to address.

Enhancing RAG with multi-step reasoning for complex problems
Reducing hallucinations in reasoning-oriented approaches with retrieved knowledge
Developing synergistic RAG-reasoning frameworks for better performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Advanced reasoning optimizes RAG stages
Retrieved knowledge supplies missing premises
Agentic LLMs interleave search and reasoning
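The second bullet, retrieved knowledge supplying missing premises, is often realized by injecting passages into the prompt as numbered premises the model must cite. The template below is a hypothetical sketch for illustration, not a format prescribed by the survey.

```python
# Hypothetical prompt builder for RAG-enhanced reasoning: retrieved
# passages become numbered premises the model is asked to cite, so each
# reasoning step is grounded in evidence rather than parametric memory.
def build_grounded_prompt(question: str, passages: list[str]) -> str:
    premises = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages, 1))
    return (
        f"Premises:\n{premises}\n\n"
        f"Question: {question}\n"
        "Answer step by step, citing premise numbers like [1]."
    )

prompt = build_grounded_prompt(
    "What is the capital of France?",
    ["Paris is the capital of France.", "France is in Western Europe."],
)
```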
Authors

Yangning Li
Tsinghua University
Weizhi Zhang
University of Illinois Chicago
Yuyao Yang
University of Illinois Chicago
Wei-Chieh Huang
University of Illinois Chicago
Yaozu Wu
The University of Tokyo
Junyu Luo
Peking University
Yuanchen Bei
University of Illinois Urbana-Champaign
Henry Peng Zou
University of Illinois Chicago
Xiao Luo
University of California, Los Angeles
Yusheng Zhao
Peking University
Chunkit Chan
HKUST
Yankai Chen
Cornell University
Zhongfen Deng
University of Illinois Chicago
Yinghui Li
Tsinghua University
Hai-Tao Zheng
Tsinghua University
Dongyuan Li
The University of Tokyo
Renhe Jiang
The University of Tokyo
Ming Zhang
Peking University
Yangqiu Song
HKUST
Philip S. Yu
University of Illinois Chicago