Retrieval-Augmented Generation for Natural Language Processing: A Survey

πŸ“… 2024-07-18
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 15
✨ Influential: 0
πŸ€– AI Summary
To address hallucination, knowledge staleness, and poor domain adaptability in large language models (LLMs), this paper conducts a systematic study of retrieval-augmented generation (RAG). We propose a full-stack RAG framework encompassing retriever design (dense, sparse, and hybrid), query rewriting, context fusion, LLM fine-tuning, and prompt engineering. We introduce the first taxonomy for dynamic knowledge updating and establish a multidimensional evaluation benchmark that balances academic rigor with industrial practicality. Additionally, we release a structured RAG knowledge graph and fully reproducible open-source code. Our contributions significantly enhance RAG’s robustness and maintainability in real-world deployments, providing both theoretical foundations and engineering best practices for knowledge-enhanced generative systems.
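
The retriever designs surveyed above (dense, sparse, and hybrid) can be illustrated with a small sketch. The code below is not the paper's tutorial code; it is a minimal, self-contained illustration in which term overlap stands in for a BM25-style sparse retriever and hashed bag-of-words cosine similarity stands in for a learned dense embedding, with hybrid retrieval as a linear interpolation of the two scores.

```python
import math
from collections import Counter

def sparse_score(query, doc):
    # Term-overlap count: a toy stand-in for BM25-style sparse retrieval.
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum(min(q[t], d[t]) for t in q)

def dense_score(query, doc, dim=64):
    # Cosine similarity over hashed bag-of-words vectors: a toy stand-in
    # for learned dense embeddings (e.g., a bi-encoder).
    def embed(text):
        v = [0.0] * dim
        for tok in text.lower().split():
            v[hash(tok) % dim] += 1.0
        return v
    a, b = embed(query), embed(doc)
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb) if na and nb else 0.0

def hybrid_retrieve(query, docs, alpha=0.5, k=2):
    # Hybrid retrieval: linear interpolation of sparse and dense scores.
    scored = [(alpha * sparse_score(query, d)
               + (1 - alpha) * dense_score(query, d), d) for d in docs]
    return [d for _, d in sorted(scored, reverse=True)[:k]]

docs = [
    "RAG augments LLMs with retrieval",
    "cats sleep all day",
    "dense retrieval uses embeddings",
]
print(hybrid_retrieve("retrieval for LLMs", docs, k=1))
```

Real systems replace both scorers with proper implementations (an inverted index for sparse, a trained encoder plus a vector index for dense); the interpolation weight `alpha` is then tuned on held-out queries.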

πŸ“ Abstract
Large language models (LLMs) have achieved great success across various fields, benefiting from the vast number of parameters in which they store knowledge. However, LLMs still suffer from several key issues, such as hallucination, difficulty updating knowledge, and a lack of domain-specific expertise. The emergence of retrieval-augmented generation (RAG), which leverages an external knowledge database to augment LLMs, compensates for these drawbacks. This paper reviews all significant RAG techniques, particularly retrievers and retrieval fusion, and provides tutorial code for implementing the representative techniques. It further discusses RAG updating, covering RAG with and without knowledge updates, and then introduces RAG evaluation and benchmarking, as well as applications of RAG in representative NLP tasks and industrial scenarios. Finally, the paper discusses RAG's future directions and challenges for advancing the field.
Problem

Research questions and friction points this paper is trying to address.

Addresses hallucination and knowledge update issues in LLMs.
Explores retrieval-augmented generation to enhance domain-specific expertise.
Reviews techniques, updates, and applications of RAG in NLP.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-augmented generation enhances LLMs with external knowledge.
Focuses on retriever techniques and retrieval fusion methods.
Provides tutorial code for implementing RAG techniques.
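
One retrieval fusion method commonly used to combine rankings from different retrievers (e.g., a sparse and a dense run) is reciprocal rank fusion (RRF). The sketch below is an illustrative example, not code from the paper: it merges ranked lists by summing `1 / (k + rank)` per document, so it needs only ranks, not comparable scores.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    # RRF: each list contributes 1/(k + rank) to a document's fused score.
    # k=60 is the conventional smoothing constant from the RRF literature.
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

sparse_run = ["d1", "d2", "d3"]  # hypothetical BM25 ranking
dense_run = ["d1", "d3", "d4"]   # hypothetical dense-retriever ranking
print(reciprocal_rank_fusion([sparse_run, dense_run]))
# → ['d1', 'd3', 'd2', 'd4']
```

Because RRF ignores raw scores, it avoids calibrating sparse and dense scores onto a common scale, which is why it is a popular default for hybrid retrieval.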