🤖 AI Summary
To address information overload and knowledge fragmentation in clinical question answering for Long COVID, this work proposes Guide-RAG, a retrieval-augmented generation framework that draws on two levels of evidence, clinical practice guidelines and high-quality systematic reviews, to balance consensus recommendations with granular empirical findings. The authors design an LLM-as-a-judge evaluation framework that assesses answer faithfulness, relevance, and comprehensiveness, and release LongCOVID-CQ, the first expert-annotated QA dataset for Long COVID clinical questions. Experiments show that RAG configurations integrating both guidelines and systematic reviews consistently outperform single-source baselines such as guidelines-only retrieval or large-scale literature corpora, yielding substantial gains in answer quality and clinical utility. The core contribution is a principled clinical RAG paradigm for emerging diseases that balances authoritative guidance with deep, evidence-based reasoning, advancing trustworthy, clinically actionable AI assistance.
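The dual-level retrieval idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Doc` class, the toy corpora, and the token-overlap scorer are all assumptions standing in for a real retriever.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    source: str  # "guideline" or "review" (illustrative labels)

def overlap(query: str, text: str) -> int:
    """Toy relevance score: number of shared lowercase tokens."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve_dual(query: str, guidelines: list[Doc],
                  reviews: list[Doc], k: int = 1) -> list[Doc]:
    """Take the top-k documents from EACH corpus, so the generation
    context always mixes consensus guidance with review-level evidence."""
    def top_k(corpus: list[Doc]) -> list[Doc]:
        return sorted(corpus, key=lambda d: overlap(query, d.text), reverse=True)[:k]
    return top_k(guidelines) + top_k(reviews)
```

In practice a dense or hybrid retriever would replace the token-overlap score; the point of the sketch is the per-corpus quota, which prevents either evidence level from crowding out the other.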
📝 Abstract
As AI chatbots gain adoption in clinical medicine, developing effective frameworks for complex, emerging diseases presents significant challenges. We developed and evaluated six Retrieval-Augmented Generation (RAG) corpus configurations for Long COVID (LC) clinical question answering, ranging from expert-curated sources to large-scale literature databases. Our evaluation employed an LLM-as-a-judge framework across faithfulness, relevance, and comprehensiveness metrics using LongCOVID-CQ, a novel dataset of expert-generated clinical questions. Our RAG corpus configuration combining clinical guidelines with high-quality systematic reviews consistently outperformed both narrow single-guideline approaches and large-scale literature databases. Our findings suggest that for emerging diseases, retrieval grounded in curated secondary reviews provides an optimal balance between narrow consensus documents and unfiltered primary literature, supporting clinical decision-making while avoiding information overload and oversimplified guidance. We propose Guide-RAG, a chatbot system and accompanying evaluation framework that integrates both curated expert knowledge and comprehensive literature databases to effectively answer LC clinical questions.
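The LLM-as-a-judge evaluation described above can be sketched as one judge call per metric followed by a simple aggregate. The rubric wording, the 1-5 scale, and the `llm` callable are illustrative assumptions; the paper's actual prompts are not reproduced here.

```python
# Hypothetical rubric; the paper's exact judge prompts are not shown here.
JUDGE_PROMPT = (
    "You are a clinical expert. Rate the answer from 1-5 on {metric}.\n"
    "Question: {question}\nAnswer: {answer}\nScore (integer only):"
)

METRICS = ("faithfulness", "relevance", "comprehensiveness")

def judge_answer(question: str, answer: str, llm) -> dict[str, int]:
    """Query the judge model once per metric; `llm` is any callable
    mapping a prompt string to a response string."""
    scores = {}
    for metric in METRICS:
        prompt = JUDGE_PROMPT.format(metric=metric, question=question, answer=answer)
        scores[metric] = int(llm(prompt).strip())
    return scores

def overall(scores: dict[str, int]) -> float:
    """Aggregate the three per-metric scores into a single mean."""
    return sum(scores[m] for m in METRICS) / len(METRICS)
```

Scoring each metric in a separate call keeps the judge focused on one criterion at a time, which is a common way to reduce halo effects in LLM-based evaluation.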