Trustworthy Medical Question Answering: An Evaluation-Centric Survey

📅 2025-06-04
📈 Citations: 0 · Influential: 0
🤖 AI Summary
To address six critical trustworthiness challenges arising from the deployment of large language models (LLMs) in medical question answering (QA), namely factuality, robustness, fairness, safety, explainability, and calibration, this survey proposes the first unified evaluation framework covering all six dimensions, establishing an assessment-centric research paradigm. It reviews evaluation-guided improvement techniques, including retrieval-augmented grounding, adversarial fine-tuning, and safety alignment, and compares major benchmarks such as MedMCQA, TruthfulQA-Med, and SafeMed. Key contributions include: (i) the first comprehensive taxonomy of trustworthiness evaluation techniques for medical QA; (ii) identification and formal definition of key open challenges, notably scalable expert evaluation and integrated multi-dimensional metrics; and (iii) methodological foundations and practical pathways toward the safe, reliable, and transparent clinical deployment of LLMs.
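
As a concrete illustration of how one of these dimensions can be scored, the sketch below computes expected calibration error (ECE), a standard metric for the calibration dimension. This is a minimal sketch only; the survey's own joint metric system is not specified here, and the function name and toy data are illustrative.

```python
# Minimal sketch (not the paper's metric system): expected calibration error
# (ECE), a standard way to score the calibration dimension. Predictions are
# binned by confidence, and each bin's mean confidence is compared with its
# empirical accuracy; the sample-weighted gap is the ECE.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: model probabilities for the predicted answers, in [0, 1].
    correct: 1 if the predicted answer was right, else 0."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by fraction of samples in bin
    return ece

# Toy example: four MedMCQA-style answers with stated confidences.
print(expected_calibration_error([0.9, 0.8, 0.95, 0.6], [1, 1, 0, 1]))
```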

📝 Abstract
Trustworthiness in healthcare question-answering (QA) systems is essential for ensuring patient safety, clinical effectiveness, and user confidence. As large language models (LLMs) become increasingly integrated into medical settings, the reliability of their responses directly influences clinical decision-making and patient outcomes. However, achieving comprehensive trustworthiness in medical QA poses significant challenges due to the inherent complexity of healthcare data, the critical nature of clinical scenarios, and the multifaceted dimensions of trustworthy AI. In this survey, we systematically examine six key dimensions of trustworthiness in medical QA, namely Factuality, Robustness, Fairness, Safety, Explainability, and Calibration. We review how each dimension is evaluated in existing LLM-based medical QA systems. We compile and compare major benchmarks designed to assess these dimensions and analyze evaluation-guided techniques that drive model improvements, such as retrieval-augmented grounding, adversarial fine-tuning, and safety alignment. Finally, we identify open challenges, such as scalable expert evaluation, integrated multi-dimensional metrics, and real-world deployment studies, and propose future research directions to advance the safe, reliable, and transparent deployment of LLM-powered medical QA.
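
To make the robustness dimension mentioned above concrete, here is a minimal sketch of one common evaluation pattern in this literature: perturbing a question's surface form and measuring whether the model's answer stays stable. The typo-based perturbation scheme and all names are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of a robustness probe: perturb a question's surface form
# with random character-level typos and measure how often the answer flips.
# The perturbation scheme is an illustrative assumption, not the paper's.
import random

def perturb(question: str, rate: float = 0.05, seed: int = 0) -> str:
    """Return a copy of the question with random letter substitutions."""
    rng = random.Random(seed)
    chars = list(question)
    for i, c in enumerate(chars):
        if c.isalpha() and rng.random() < rate:
            chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

def answer_stability(model, question: str, n_variants: int = 5) -> float:
    """Fraction of perturbed variants whose answer matches the clean answer.
    `model` is any callable mapping a question string to an answer string."""
    clean = model(question)
    same = sum(model(perturb(question, seed=s)) == clean for s in range(n_variants))
    return same / n_variants

# Toy usage with a keyword-based stand-in for an LLM:
dummy = lambda q: "acetaminophen" if "fever" in q.lower() else "unknown"
print(answer_stability(dummy, "What is a first-line drug for fever?"))
```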
Problem

Research questions and friction points this paper is trying to address.

Evaluating trustworthiness in medical QA systems for patient safety
Assessing six key dimensions of trustworthy AI in healthcare
Identifying challenges for reliable LLM deployment in clinical settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-augmented grounding enhances factuality (a minimal sketch follows this list)
Adversarial fine-tuning improves robustness
Safety alignment reduces harmful or unsafe responses
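
Below is a minimal sketch of the retrieval-augmented grounding pattern referenced above, under the assumption of a generic passage retriever and an LLM completion function. `retrieve` and `generate` are hypothetical stand-ins, not an API named by the survey.

```python
# Minimal sketch of retrieval-augmented grounding for medical QA, assuming a
# generic passage retriever (e.g., over a medical corpus) and an LLM call.
# `retrieve` and `generate` are hypothetical stand-ins, not the paper's API.
from typing import Callable, List

def answer_with_grounding(
    question: str,
    retrieve: Callable[[str, int], List[str]],  # top-k evidence passages
    generate: Callable[[str], str],             # LLM completion call
    k: int = 3,
) -> str:
    passages = retrieve(question, k)
    # Number the passages so the model can cite them, keeping the answer
    # tied to retrieved evidence rather than parametric memory alone.
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the medical question using only the evidence below, citing "
        "passage numbers. Reply 'insufficient evidence' if it does not "
        "support an answer.\n\n"
        f"Evidence:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)
```

The abstain instruction is one common way grounding supports factuality: the model is pushed to defer rather than guess when retrieval returns no supporting evidence.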