π€ AI Summary
To address the growing risk of health misinformation, this paper proposes a three-stage Retrieval-Augmented Generation (RAG) framework that jointly optimizes topical relevance and factual accuracy. First, scientific literature passages are retrieved; second, a large language model generates interpretable reference texts (GenText); third, a dual-dimension re-ranking is performed by integrating stance detection and semantic similarity. Crucially, the framework unifies quantitative factual accuracy assessment with explainable text generation directly within the RAG pipeline, enabling dynamic, traceable retrieval optimization. Evaluated on multiple health information benchmark datasets, the method achieves a 12.3% improvement in topical relevance and an 18.7% gain in factual accuracy over state-of-the-art baselines. This work establishes a novel paradigm for trustworthy health information retrieval, advancing both reliability and transparency in evidence-based health communication.
π Abstract
The exponential surge in online health information, coupled with its increasing use by non-experts, highlights the pressing need for advanced Health Information Retrieval models that consider not only topical relevance but also the factual accuracy of the retrieved information, given the potential risks associated with health misinformation. To this aim, this paper introduces a solution driven by Retrieval-Augmented Generation (RAG), which leverages the capabilities of generative Large Language Models (LLMs) to enhance the retrieval of health-related documents grounded in scientific evidence. In particular, we propose a three-stage model: in the first stage, the user's query is employed to retrieve topically relevant passages with associated references from a knowledge base constituted by scientific literature. In the second stage, these passages, alongside the initial query, are processed by LLMs to generate a contextually relevant rich text (GenText). In the last stage, the documents to be retrieved are evaluated and ranked both from the point of view of topical relevance and factual accuracy by means of their comparison with GenText, either through stance detection or semantic similarity. In addition to calculating factual accuracy, GenText can offer a layer of explainability for it, aiding users in understanding the reasoning behind the retrieval. Experimental evaluation of our model on benchmark datasets and against baseline models demonstrates its effectiveness in enhancing the retrieval of both topically relevant and factually accurate health information, thus presenting a significant step forward in the health misinformation mitigation problem.