🤖 AI Summary
This work addresses the challenge that existing large language models (LLMs) struggle to generate effective and relevant follow-up questions in medical pre-diagnosis because they lack domain-specific knowledge. To overcome this limitation, the authors integrate a structured medical knowledge graph directly into the LLM inference process, enabling seamless knowledge infusion during generation. The method further incorporates an active in-context learning mechanism to guide the model toward clinically meaningful inquiries. On standard benchmarks, the proposed framework significantly improves symptom-focused follow-up questioning, achieving a 5%–8% absolute gain in recall over state-of-the-art baselines. These results demonstrate the effectiveness of knowledge-driven follow-up question generation in medical natural language processing.
📝 Abstract
Clinical diagnosis is time-consuming, requiring intensive interactions between patients and medical professionals. While large language models (LLMs) could ease the pre-diagnostic workload, their limited domain knowledge hinders effective medical question generation. We introduce KG-Followup, a knowledge graph-augmented LLM with active in-context learning that generates relevant and important follow-up questions, serving as a critical module for pre-diagnostic assessment. A structured medical knowledge graph acts as a seamless patch, supplying professional domain expertise on which the LLM can reason. Experiments demonstrate that KG-Followup outperforms state-of-the-art methods by 5%–8% in recall on relevant benchmarks.
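To make the described pipeline concrete, here is a minimal sketch of the general idea: retrieve knowledge-graph facts for the patient's reported symptoms, actively select the most similar in-context demonstration, and assemble both into the prompt from which an LLM would generate the follow-up question. The toy graph, example pool, and all function names are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch only: KG-augmented prompting with active example
# selection. The graph, examples, and names are hypothetical.

# Toy medical KG: symptom -> associated findings a clinician might probe next.
MEDICAL_KG = {
    "chest pain": ["shortness of breath", "radiation to left arm", "sweating"],
    "fever": ["cough", "chills", "rash"],
}

# Hypothetical few-shot pool for active in-context learning.
EXAMPLE_POOL = [
    {"symptoms": ["fever", "cough"],
     "question": "How long have you had the fever, and is the cough productive?"},
    {"symptoms": ["chest pain"],
     "question": "Does the pain spread to your arm or jaw, and are you short of breath?"},
]

def retrieve_facts(symptoms):
    """Collect KG neighbours of each reported symptom as textual facts."""
    facts = []
    for s in symptoms:
        for related in MEDICAL_KG.get(s, []):
            facts.append(f"{s} is clinically associated with {related}")
    return facts

def select_example(symptoms):
    """Active selection: pick the demonstration with maximal symptom overlap."""
    return max(EXAMPLE_POOL,
               key=lambda ex: len(set(ex["symptoms"]) & set(symptoms)))

def build_prompt(symptoms):
    """Assemble the KG-augmented prompt that would be fed to the LLM."""
    facts = "\n".join(f"- {f}" for f in retrieve_facts(symptoms))
    ex = select_example(symptoms)
    return (
        "Medical knowledge:\n" + facts + "\n\n"
        f"Example: patient reports {', '.join(ex['symptoms'])}.\n"
        f"Follow-up: {ex['question']}\n\n"
        f"Patient reports: {', '.join(symptoms)}.\n"
        "Generate one important follow-up question:"
    )

print(build_prompt(["chest pain"]))
```

In this sketch the knowledge graph is "patched" into the prompt as plain-text facts; the real system presumably uses richer graph reasoning, but the separation of retrieval, example selection, and prompt assembly matches the paper's high-level description.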