AI Summary
To address hallucination, knowledge staleness, and poor domain adaptability in large language models (LLMs), this paper conducts a systematic study of retrieval-augmented generation (RAG). We propose a full-stack RAG framework encompassing retriever design (dense, sparse, and hybrid), query rewriting, context fusion, LLM fine-tuning, and prompt engineering. We introduce the first taxonomy for dynamic knowledge updating and establish a multidimensional evaluation benchmark that balances academic rigor with industrial practicality. Additionally, we release a structured RAG knowledge graph and fully reproducible open-source code. Our contributions significantly enhance RAG's robustness and maintainability in real-world deployments, providing both theoretical foundations and engineering best practices for knowledge-enhanced generative systems.
Abstract
Large language models (LLMs) have achieved great success across various fields, benefiting from the vast number of parameters in which they store knowledge. However, LLMs still suffer from several key issues, such as hallucination, outdated knowledge, and a lack of domain-specific expertise. Retrieval-augmented generation (RAG), which leverages an external knowledge database to augment LLMs, compensates for these drawbacks. This paper reviews the significant techniques of RAG, with a particular focus on the retriever and retrieval fusion. In addition, tutorial code is provided for implementing representative RAG techniques. This paper further discusses RAG updating, including approaches with and without knowledge updates. We then introduce RAG evaluation and benchmarking, as well as applications of RAG in representative NLP tasks and industrial scenarios. Finally, we discuss future directions and challenges of RAG to promote the development of this field.
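The hybrid retriever mentioned above can be illustrated with a minimal sketch. This is not the paper's tutorial code: the hashing-based embedding stands in for a learned dense encoder, and the term-overlap score stands in for a real sparse scorer such as BM25; `alpha` is an assumed interpolation weight.

```python
import math
from collections import Counter

def sparse_score(query: str, doc: str) -> float:
    # Term-overlap score in [0, 1] (a stand-in for BM25):
    # fraction of query token occurrences matched in the document.
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum(min(q[t], d[t]) for t in q) / max(sum(q.values()), 1)

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy hashing embedding (a stand-in for a learned dense encoder):
    # bucket token counts into a fixed-size vector, then L2-normalize.
    vec = [0.0] * dim
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def dense_score(query: str, doc: str) -> float:
    # Cosine similarity between the toy query and document embeddings.
    return sum(a * b for a, b in zip(embed(query), embed(doc)))

def hybrid_retrieve(query: str, docs: list[str], alpha: float = 0.5, k: int = 2) -> list[str]:
    # Hybrid fusion: interpolate dense and sparse scores, return top-k docs.
    scored = [(alpha * dense_score(query, d) + (1 - alpha) * sparse_score(query, d), d)
              for d in docs]
    return [d for _, d in sorted(scored, reverse=True)[:k]]
```

In a real system the two scorers run over separate indexes (an inverted index and a vector index) and their candidate lists are merged, for example by score interpolation as here or by reciprocal rank fusion.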