W-RAG: Weakly Supervised Dense Retrieval in RAG for Open-domain Question Answering

📅 2024-08-15
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Dense retrievers for open-domain question answering (OpenQA) suffer from limited training data because human-annotated supporting evidence is scarce. Method: This paper proposes a weakly supervised training framework that leverages large language models (LLMs): pseudo-positive passages are selected by reranking BM25 initial retrieval results with LLM-generated conditional answer probabilities, and the dense retriever is then optimized end-to-end via contrastive learning. Contribution/Results: To the authors' knowledge, this is the first work to employ LLM-generated answer probabilities as a scalable, annotation-free weak supervision signal, eliminating reliance on manual evidence labels. Evaluated on four standard OpenQA benchmarks, the method achieves retrieval accuracy and end-to-end QA performance competitive with fully supervised baselines, demonstrating both effectiveness and strong generalization across diverse domains.

📝 Abstract
In knowledge-intensive tasks such as open-domain question answering (OpenQA), large language models (LLMs) often struggle to generate factual answers when relying solely on their internal (parametric) knowledge. To address this limitation, Retrieval-Augmented Generation (RAG) systems enhance LLMs by retrieving relevant information from external sources, thereby positioning the retriever as a pivotal component. Although dense retrieval demonstrates state-of-the-art performance, its training poses challenges due to the scarcity of ground-truth evidence, largely attributed to the high costs of human annotation. In this paper, we propose W-RAG, a method that draws weak training signals from the downstream task (such as OpenQA) of an LLM, and fine-tunes the retriever to prioritize passages that most benefit the task. Specifically, we rerank the top-$k$ passages retrieved via BM25 by assessing the probability that the LLM will generate the correct answer for a question given each passage. The highest-ranking passages are then used as positive fine-tuning examples for dense retrieval. We conduct comprehensive experiments across four publicly available OpenQA datasets to demonstrate that our approach enhances both retrieval and OpenQA performance compared to baseline models, achieving results comparable to models fine-tuned with human-labeled data.
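The weak-supervision step described above, reranking BM25 candidates by the probability that the LLM generates the gold answer conditioned on each passage, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `token_logprob` is a hypothetical stand-in for a real LLM call that would return the log-probability of one answer token given the context, and the toy scorer at the bottom exists only to make the example runnable.

```python
import math

def answer_logprob(question, passage, answer, token_logprob):
    """Score a passage by the log-probability that a language model
    generates the gold answer given (passage, question) as context.
    Token log-probs are summed autoregressively over the answer."""
    context = f"{passage}\n{question}"
    logp = 0.0
    for token in answer.split():
        logp += token_logprob(context, token)
        context += " " + token  # condition on previously generated tokens
    return logp

def select_pseudo_positives(question, answer, bm25_passages, token_logprob, top_n=1):
    """Rerank BM25 candidates by LLM answer probability and keep the
    highest-scoring passages as weakly labeled positives."""
    scored = sorted(
        bm25_passages,
        key=lambda p: answer_logprob(question, p, answer, token_logprob),
        reverse=True,
    )
    return scored[:top_n]

# Hypothetical toy scorer: a passage whose text contains the answer token
# is assigned a higher generation probability.
def toy_token_logprob(context, token):
    return math.log(0.9) if token.lower() in context.lower() else math.log(0.1)

passages = ["Paris is the capital of France.", "Berlin is a city in Germany."]
positives = select_pseudo_positives(
    "What is the capital of France?", "Paris", passages, toy_token_logprob
)
# positives → ["Paris is the capital of France."]
```

In practice the scoring call would sum token log-probabilities from a real LLM's output logits; the selection logic itself stays the same.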
Problem

Research questions and friction points this paper is trying to address.

Addresses scarcity of ground-truth evidence for dense retrieval training
Enhances retrieval performance without human-labeled supervision
Improves factual answer generation in open-domain question answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses weakly supervised signals from LLM downstream tasks
Reranks BM25 passages via LLM answer probability
Fine-tunes dense retriever with top-ranking positive examples
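The third step above, fine-tuning the dense retriever on the selected pseudo-positives, is typically done with a contrastive (InfoNCE-style) objective that pushes the query embedding toward its pseudo-positive passage and away from negatives. A minimal NumPy sketch under that assumption; the embeddings here are deterministic toy vectors, not outputs of a real encoder:

```python
import numpy as np

def info_nce_loss(q_emb, pos_emb, neg_embs, temperature=0.05):
    """InfoNCE loss for one query: the pseudo-positive passage's
    dot-product similarity should exceed the negatives'."""
    sims = np.array([q_emb @ pos_emb] + [q_emb @ n for n in neg_embs]) / temperature
    sims = sims - sims.max()                 # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()
    return float(-np.log(probs[0]))          # positive sits at index 0

# Toy embeddings: the loss is near zero when query and pseudo-positive
# align, and large when a negative aligns with the query instead.
q = np.array([1.0, 0.0])
loss_good = info_nce_loss(q, np.array([1.0, 0.0]),
                          [np.array([0.0, 1.0]), np.array([-1.0, 0.0])])
loss_bad  = info_nce_loss(q, np.array([0.0, 1.0]),
                          [np.array([1.0, 0.0]), np.array([-1.0, 0.0])])
```

During training this loss would be backpropagated through the query and passage encoders, with in-batch passages commonly reused as negatives.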
Jinming Nian
Santa Clara University, Santa Clara, CA, USA
Zhiyuan Peng
Santa Clara University, Santa Clara, CA, USA
Qifan Wang
Meta AI, Menlo Park, CA, USA
Yi Fang
Santa Clara University, Santa Clara, CA, USA