🤖 AI Summary
In retrieval-augmented generation (RAG), effectively integrating external retrieved knowledge with a large language model's internal parametric knowledge remains challenging, as existing fusion strategies are static or heuristic. Method: This paper proposes Self-Selection, a novel framework in which the model autonomously selects the more accurate of two candidate responses—one generated solely from internal knowledge and the other from fused internal and external knowledge—enabling dynamic, instance-level discrimination between knowledge sources. It introduces direct preference optimization (DPO) to RAG for the first time, jointly optimizing generation and selection via end-to-end preference learning, and constructs a high-quality Retrieval Generation Preference (RGP) dataset. Results: On Natural Questions and TriviaQA, Self-Selection significantly outperforms strong baselines with Llama2-13B-Chat and Mistral-7B, demonstrating improved response accuracy and robust knowledge integration. Its core innovation lies in replacing conventional knowledge concatenation with model-driven, dynamic knowledge-source selection and unified preference-based learning.
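The inference-time procedure the summary describes can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: `llm_generate` and `retrieve` are hypothetical stand-ins for a language-model call and a retriever, and the prompt wording is invented for the sketch.

```python
def self_selection_answer(llm_generate, retrieve, question):
    """Generate two candidate answers and let the model itself pick one.

    `llm_generate`: callable mapping a prompt string to a model response.
    `retrieve`: callable mapping a question to a list of passages.
    Both are placeholders for whatever LLM/retriever stack is in use.
    """
    # Candidate 1: answer from the model's internal parametric knowledge only.
    internal = llm_generate(f"Answer the question: {question}")

    # Candidate 2: answer grounded in externally retrieved passages.
    context = "\n".join(retrieve(question))
    augmented = llm_generate(
        f"Context:\n{context}\n\nAnswer the question: {question}"
    )

    # Selection step: the same model judges which candidate is more accurate,
    # replacing a fixed fusion rule with an instance-level decision.
    choice = llm_generate(
        f"Question: {question}\n"
        f"Candidate A: {internal}\n"
        f"Candidate B: {augmented}\n"
        "Which candidate answers the question correctly? Reply 'A' or 'B'."
    )
    return internal if choice.strip().upper().startswith("A") else augmented
```

The key design point is that neither candidate is preferred a priori: the retrieved context is used only when the model's own selection step favors it.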
📝 Abstract
Retrieval-Augmented Generation (RAG), which integrates external knowledge into Large Language Models (LLMs), has proven effective in enabling LLMs to produce more accurate and reliable responses. However, effectively integrating external retrieved knowledge with the internal parametric knowledge of LLMs remains a significant challenge. In this work, we propose a novel Self-Selection RAG framework, in which the LLM selects between pairwise responses—one generated solely from its internal parametric knowledge and the other generated with external retrieved knowledge as well—to achieve enhanced accuracy. To this end, we devise a Self-Selection-RGP method that enhances the LLM's capabilities in both generating and selecting the correct answer, by training the LLM with Direct Preference Optimization (DPO) over a curated Retrieval Generation Preference (RGP) dataset. Experimental results with two open-source LLMs (i.e., Llama2-13B-Chat and Mistral-7B) demonstrate the superiority of our approach over baseline methods on the Natural Questions (NQ) and TriviaQA datasets.
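For reference, the standard DPO objective (Rafailov et al., 2023) that such preference training instantiates is shown below; how Self-Selection-RGP constructs the preferred response $y_w$ and dispreferred response $y_l$ from the RGP dataset is specific to the paper and not reproduced here.

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
$$

Here $\pi_\theta$ is the policy being trained, $\pi_{\mathrm{ref}}$ a frozen reference model, $\sigma$ the sigmoid function, and $\beta$ a temperature controlling deviation from the reference; the loss pushes the policy to assign relatively higher likelihood to the preferred response without a separate reward model.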