🤖 AI Summary
This work addresses the clinical applicability of large language models (LLMs) in physician–patient dialogues. We systematically compare Low-Rank Adaptation (LoRA) fine-tuning and Retrieval-Augmented Generation (RAG) across heterogeneous, multi-source medical dialogue datasets spanning diverse clinical domains. We introduce the first AI system for Medicare-relevant physician–patient conversations and propose a novel multidimensional evaluation framework assessing medical factual accuracy, clinical guideline adherence, linguistic coherence, and empathetic safety. Experimental results show RAG significantly outperforms LoRA in factual accuracy (+23.6%) and regulatory/safety compliance, whereas LoRA excels in inference latency and textual coherence. A hybrid LoRA–RAG approach achieves an 18.4% gain in overall performance. Crucially, this study is the first to empirically uncover mechanistic differences between these two dominant lightweight adaptation paradigms in real-world clinical dialogue settings, providing evidence-based guidance and methodological foundations for deploying LLMs in healthcare.
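The summary contrasts LoRA fine-tuning with RAG. LoRA's core mechanism, adding a trainable low-rank update to a frozen weight matrix, can be sketched in a toy pure-Python form (a minimal illustration; all names, shapes, and values here are assumptions for demonstration, not the paper's implementation):

```python
def matmul(X, Y):
    """Naive matrix product for nested-list matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def madd(X, Y):
    """Element-wise sum of two equally shaped matrices."""
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def scale(X, s):
    """Multiply every entry of a matrix by scalar s."""
    return [[s * a for a in row] for row in X]

def lora_forward(x, W, A, B, alpha=1.0):
    """Compute y = x @ (W + (alpha / r) * A @ B).

    W (d_in x d_out) is frozen; only the low-rank factors
    A (d_in x r) and B (r x d_out) would be trained.
    """
    r = len(B)  # rank = number of rows of B
    delta = scale(matmul(A, B), alpha / r)
    return matmul(x, madd(W, delta))

# Toy example: rank-1 update on a 2x2 identity weight.
W = [[1, 0], [0, 1]]
A = [[1], [0]]       # d_in x r
B = [[0, 1]]         # r x d_out
y = lora_forward([[1, 2]], W, A, B, alpha=1.0)
```

Because only `A` and `B` carry gradients, the number of trainable parameters scales with the rank `r` rather than with the full weight matrix, which is what makes LoRA a lightweight adaptation method.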
📝 Abstract
Large language models (LLMs) have shown impressive capabilities in natural language processing tasks, including dialogue generation. This research conducts a comparative analysis of two prominent adaptation techniques, fine-tuning with Low-Rank Adaptation (LoRA) and the Retrieval-Augmented Generation (RAG) framework, in the context of doctor–patient conversations drawn from multiple datasets spanning mixed medical domains. The analysis involves three models: Llama-2, GPT, and an LSTM-based baseline. Using real-world doctor–patient dialogues, we comprehensively evaluate model performance on key metrics: language quality (perplexity, BLEU score), factual accuracy (fact-checking against medical knowledge bases), adherence to medical guidelines, and human judgments of coherence, empathy, and safety. The findings illuminate the strengths and limitations of each approach and their suitability for healthcare applications. We further investigate the robustness of the models in handling diverse patient queries, ranging from general health inquiries to specific medical conditions, and explore the impact of domain-specific knowledge integration, highlighting the potential to enhance LLM performance through targeted data augmentation and retrieval strategies.
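Two of the automatic metrics mentioned above, perplexity and BLEU, can be sketched in standard-library Python (a simplified illustration only; production evaluation would use a library such as sacrebleu, and the function names here are illustrative):

```python
import math
from collections import Counter

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean per-token log-probability."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions (n = 1..max_n) with a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum((cand & ref).values())       # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    # Penalize candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

For example, a model that assigns probability 0.5 to every token has perplexity 2, and a candidate identical to its reference scores a BLEU of 1.0.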