🤖 AI Summary
To address hallucination in biomedical RAG systems caused by post-retrieval noise and insufficient evidence verification, this paper proposes MedTrust-Guided Iterative RAG to enhance the factual consistency and traceability of answers. Methodologically: (1) it introduces citation-aware reasoning with structured Negative Knowledge Assertions to handle missing evidence; (2) it designs an iterative retrieval-verification agent that dynamically refines queries based on Medical Gap Analysis; and (3) it constructs the MedTrust-Align Module, integrating positive and negative preference signals via Direct Preference Optimization (DPO) to suppress hallucinated responses. Evaluated on the MedMCQA, MedQA, and MMLU-Med benchmarks, MedTrust consistently outperforms mainstream baselines, achieving average accuracy gains of 2.7% with LLaMA3.1-8B-Instruct and 2.4% with Qwen3-8B as backbones, demonstrating its effectiveness for trustworthy biomedical question answering.
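The iterative retrieval-verification agent summarized above can be sketched roughly as the following loop. All function and field names here are illustrative assumptions, not the paper's actual interfaces: a `verifier` performs the gap analysis and either accepts the evidence or proposes a refined query, and a structured Negative Knowledge Assertion is emitted when evidence stays insufficient.

```python
def answer_with_verification(question, retriever, verifier, generator,
                             max_rounds=3):
    """Illustrative retrieve-verify-refine loop (hypothetical interfaces):
    keep refining the query via gap analysis until the verifier judges
    the accumulated evidence adequate, then generate a cited answer."""
    query = question
    evidence = []
    report = {"adequate": False}
    for _ in range(max_rounds):
        evidence += retriever(query)           # fetch candidate documents
        report = verifier(question, evidence)  # medical gap analysis
        if report["adequate"]:
            break
        query = report["refined_query"]        # target the missing evidence
    if not report["adequate"]:
        # Rather than guessing, return a structured Negative Knowledge Assertion.
        return {"answer": None,
                "assertion": "Insufficient evidence retrieved for this question."}
    return {"answer": generator(question, evidence), "citations": evidence}
```

In this sketch every generated answer carries the evidence list it was grounded in, mirroring the citation-aware reasoning requirement; the exact stopping criterion and query-refinement strategy are the paper's contribution and are only stubbed here.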
📝 Abstract
Biomedical question answering (QA) requires accurate interpretation of complex medical knowledge. Large language models (LLMs) have shown promising capabilities in this domain, with retrieval-augmented generation (RAG) systems enhancing performance by incorporating external medical literature. However, RAG-based approaches in biomedical QA suffer from hallucinations due to post-retrieval noise and insufficient verification of retrieved evidence, undermining response reliability. We propose MedTrust-Guided Iterative RAG, a framework designed to enhance factual consistency and mitigate hallucinations in medical QA. Our method introduces three key innovations. First, it enforces citation-aware reasoning by requiring all generated content to be explicitly grounded in retrieved medical documents, with structured Negative Knowledge Assertions used when evidence is insufficient. Second, it employs an iterative retrieval-verification process, where a verification agent assesses evidence adequacy and refines queries through Medical Gap Analysis until reliable information is obtained. Third, it integrates the MedTrust-Align Module (MTAM), which combines verified positive examples with hallucination-aware negative samples, leveraging Direct Preference Optimization to reinforce citation-grounded reasoning while penalizing hallucination-prone response patterns. Experiments on MedMCQA, MedQA, and MMLU-Med demonstrate that our approach consistently outperforms competitive baselines across multiple model architectures, achieving the best average accuracy with gains of 2.7% for LLaMA3.1-8B-Instruct and 2.4% for Qwen3-8B.
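The alignment step in MTAM relies on Direct Preference Optimization over (preferred, rejected) answer pairs. As a minimal sketch, assuming per-sequence log-probabilities are already available (the function name and arguments are illustrative, not from the paper), the standard pairwise DPO loss that would favor a citation-grounded answer over a hallucination-prone one is:

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected,
             beta=0.1):
    """Standard pairwise DPO loss: -log sigmoid(beta * margin), where the
    margin compares how much the policy prefers the chosen (citation-grounded)
    answer over the rejected (hallucinated) one, relative to a frozen
    reference model."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

The loss shrinks as the policy widens its preference margin for the verified answer, which is how MTAM's positive and negative preference signals would translate into gradient pressure against hallucination-prone response patterns; how the paper actually constructs the pairs and sets β is not reproduced here.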