Mitigating Hallucination in Large Language Models (LLMs): An Application-Oriented Survey on RAG, Reasoning, and Agentic Systems

📅 2025-10-28
📈 Citations: 0
Influential Citations: 0
🤖 AI Summary
Hallucination remains a critical reliability challenge in the practical deployment of large language models (LLMs). Method: This work introduces, for the first time, a dual-dimensional taxonomy of hallucinations, distinguishing *knowledge-based* and *logic-based* types, and proposes a unified framework integrating retrieval-augmented generation (RAG), chain-of-thought (CoT) reasoning enhancement, and agentic system orchestration to mitigate them systematically. Contribution/Results: Through empirical evaluation on standardized benchmarks, the work characterizes the suppression pathway of each technique across hallucination types, delivering a reusable, modular paradigm for enhancing LLM reliability and a standardized evaluation framework. The approach improves both factual accuracy and operational feasibility, bridging the gap between theoretical robustness and real-world deployment.

📝 Abstract
Hallucination remains one of the key obstacles to the reliable deployment of large language models (LLMs), particularly in real-world applications. Among various mitigation strategies, Retrieval-Augmented Generation (RAG) and reasoning enhancement have emerged as two of the most effective and widely adopted approaches, marking a shift from merely suppressing hallucinations to balancing creativity and reliability. However, their synergistic potential and underlying mechanisms for hallucination mitigation have not yet been systematically examined. This survey adopts an application-oriented perspective of capability enhancement to analyze how RAG, reasoning enhancement, and their integration in Agentic Systems mitigate hallucinations. We propose a taxonomy distinguishing knowledge-based and logic-based hallucinations, systematically examine how RAG and reasoning address each, and present a unified framework supported by real-world applications, evaluations, and benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Mitigating hallucinations in LLMs for reliable real-world deployment
Examining the synergistic potential of RAG and reasoning enhancement
Proposing a taxonomy of knowledge-based and logic-based hallucinations (illustrated in the sketch below)
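
To make the taxonomy concrete, here is a minimal sketch of how the two hallucination types could be encoded and told apart. The class names and the two example cases are ours for illustration; they are not drawn from the paper or its benchmarks.

```python
from dataclasses import dataclass
from enum import Enum, auto

class HallucinationType(Enum):
    """The survey's dual taxonomy: errors of knowledge vs. errors of logic."""
    KNOWLEDGE_BASED = auto()  # factually wrong content, e.g. a fabricated attribution
    LOGIC_BASED = auto()      # an invalid inference, even from correct premises

@dataclass
class HallucinationCase:
    prompt: str
    output: str
    kind: HallucinationType

# Invented examples for illustration only (not from the paper's benchmarks):
cases = [
    HallucinationCase(
        prompt="Who wrote 'The Trial'?",
        output="Thomas Mann wrote 'The Trial'.",          # wrong fact -> knowledge-based
        kind=HallucinationType.KNOWLEDGE_BASED,
    ),
    HallucinationCase(
        prompt="All squares are rectangles. Is every rectangle a square?",
        output="Yes, since all squares are rectangles.",   # invalid inference -> logic-based
        kind=HallucinationType.LOGIC_BASED,
    ),
]
```

The split matters because the two error classes call for different remedies: retrieval can supply the missing fact, but only stronger reasoning can repair a broken inference.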
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-Augmented Generation mitigates knowledge-based hallucinations
Reasoning enhancement addresses logic-based hallucinations in models
Agentic Systems integrate RAG and reasoning for reliability (see the sketch below)
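
Below is a minimal, illustrative sketch of how such an agentic loop might combine the three components. `retrieve` and `answer_with_mitigation` are hypothetical names, `retrieve` and `llm` are placeholder callables standing in for any retriever and model API, and the prompts and control flow are our assumptions rather than the paper's implementation.

```python
from typing import Callable, List

def answer_with_mitigation(
    question: str,
    retrieve: Callable[[str], List[str]],  # stand-in for any retriever (RAG component)
    llm: Callable[[str], str],             # stand-in for any text-in/text-out model call
    max_rounds: int = 2,
) -> str:
    """Agentic loop: ground the answer in retrieved evidence (targets knowledge-based
    hallucinations), prompt for step-by-step reasoning (targets logic-based ones),
    then self-check and retry with refined retrieval if the draft is flagged."""
    evidence = retrieve(question)
    answer = ""
    for _ in range(max_rounds):
        # RAG + CoT: condition the answer on evidence and explicit reasoning steps.
        answer = llm(
            "Using ONLY the evidence below, reason step by step, then give a final answer.\n"
            f"Evidence: {evidence}\nQuestion: {question}"
        )
        # Agentic verification: a second call critiques the draft against the evidence.
        verdict = llm(
            f"Evidence: {evidence}\nDraft answer: {answer}\n"
            "Is every claim supported and every inference valid? Reply OK or FLAG."
        )
        if verdict.strip().upper().startswith("OK"):
            return answer
        # Refine the query with the flagged draft and retry.
        evidence = retrieve(question + " " + answer)
    return answer  # fall back to the last draft after max_rounds
```

The design choice the sketch encodes is the one the survey attributes to agentic systems: retrieval and reasoning are not applied once in sequence but orchestrated in a verify-and-retry loop.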
Yihan Li
Electronic Information School, Wuhan University, Wuhan, China, and the School of Computing, Dublin City University, Dublin, Ireland
Xiyuan Fu
School of Public Health, Wuhan University, Wuhan, China
Ghanshyam Verma
Insight Centre for Data Analytics, University of Galway, Ireland
Paul Buitelaar
Professor in Data Analytics, Data Science Institute, University of Galway; Co-PI, Insight Centre for Data Analytics
Natural Language Processing, Knowledge Graphs, Text Mining, Semantics
Mingming Liu
Insight Centre for Data Analytics, Dublin City University, Dublin, Ireland