Conversation AI Dialog for Medicare powered by Finetuning and Retrieval Augmented Generation

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the clinical applicability of large language models (LLMs) in physician–patient dialogues. We systematically compare Low-Rank Adaptation (LoRA) fine-tuning and Retrieval-Augmented Generation (RAG) across heterogeneous, multi-source medical dialogue datasets spanning diverse clinical domains. We introduce the first AI system for Medicare-relevant physician–patient conversations and propose a novel multidimensional evaluation framework assessing medical factual accuracy, clinical guideline adherence, linguistic coherence, and empathetic safety. Experimental results show RAG significantly outperforms LoRA in factual accuracy (+23.6%) and regulatory/safety compliance, whereas LoRA excels in inference latency and textual coherence. A hybrid LoRA–RAG approach achieves an 18.4% gain in overall performance. Crucially, this study is the first to empirically uncover mechanistic differences between these two dominant lightweight adaptation paradigms in real-world clinical dialogue settings, providing evidence-based guidance and methodological foundations for deploying LLMs in healthcare.

📝 Abstract
Large language models (LLMs) have shown impressive capabilities in natural language processing tasks, including dialogue generation. This research aims to conduct a novel comparative analysis of two prominent techniques, fine-tuning with LoRA (Low-Rank Adaptation) and the Retrieval-Augmented Generation (RAG) framework, in the context of doctor-patient chat conversations with multiple datasets of mixed medical domains. The analysis involves three state-of-the-art models: Llama-2, GPT, and the LSTM model. Employing real-world doctor-patient dialogues, we comprehensively evaluate the performance of models, assessing key metrics such as language quality (perplexity, BLEU score), factual accuracy (fact-checking against medical knowledge bases), adherence to medical guidelines, and overall human judgments (coherence, empathy, safety). The findings provide insights into the strengths and limitations of each approach, shedding light on their suitability for healthcare applications. Furthermore, the research investigates the robustness of the models in handling diverse patient queries, ranging from general health inquiries to specific medical conditions. The impact of domain-specific knowledge integration is also explored, highlighting the potential for enhancing LLM performance through targeted data augmentation and retrieval strategies.
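The abstract's automatic metrics can be made concrete with a toy implementation. The sketch below computes a simplified sentence-level BLEU (uniform n-gram weights up to bigrams, with a brevity penalty); it is an illustration of the metric itself, not the paper's evaluation code:

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    # Count all n-grams of length n in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, candidate, max_n=2):
    """Toy sentence-level BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = ngram_counts(cand, n)
        ref_ngrams = ngram_counts(ref, n)
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        precisions.append(overlap / max(sum(cand_ngrams.values()), 1))
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * geo_mean
```

An identical reference and candidate score 1.0; a shorter but fully matching candidate is discounted only by the brevity penalty, which is why BLEU alone is a weak proxy for medical factual accuracy and the paper pairs it with fact-checking and human judgments.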
Problem

Research questions and friction points this paper is trying to address.

Compare fine-tuning and RAG for medical dialogues
Evaluate model performance using real-world doctor-patient conversations
Assess robustness with diverse patient queries and medical conditions
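The retrieval step at the heart of the RAG side of this comparison can be sketched minimally: rank knowledge-base passages by lexical overlap with the patient query, then splice the top hits into the generator's prompt. The Jaccard scorer and function names below are illustrative stand-ins for the dense/embedding retriever a production system would use:

```python
def tokenize(text):
    # Crude whitespace tokenizer; real systems would normalize punctuation too.
    return set(text.lower().split())

def retrieve(query, corpus, k=1):
    # Rank passages by Jaccard similarity (|intersection| / |union|)
    # between query tokens and passage tokens; return the top k.
    q = tokenize(query)
    def score(passage):
        p = tokenize(passage)
        return len(q & p) / max(len(q | p), 1)
    return sorted(corpus, key=score, reverse=True)[:k]

def build_prompt(query, passages):
    # Prepend retrieved evidence so the generator answers grounded in it.
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nPatient question: {query}\nAnswer:"
```

Grounding the answer in retrieved passages is what drives the factual-accuracy and guideline-adherence gains the summary attributes to RAG, at the cost of the extra retrieval latency where LoRA-only inference wins.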
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning with LoRA
Retrieval-Augmented Generation framework
Comparative analysis of LLMs
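LoRA, the first innovation listed, freezes the pretrained weight matrix W and trains only a low-rank pair (B, A), so the effective weight becomes W + (alpha/r)·BA. A dependency-free sketch of that update on toy 2×2 matrices (an illustration of the math, not the paper's implementation):

```python
def matmul(A, B):
    # Naive matrix multiply for small dense matrices (lists of lists).
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_delta(B, A, alpha, r):
    # The low-rank update LoRA learns: delta_W = (alpha / r) * B @ A,
    # where B is d x r and A is r x k, with r << min(d, k).
    scale = alpha / r
    return [[scale * x for x in row] for row in matmul(B, A)]

def apply_lora(W, B, A, alpha, r):
    # Effective weight W' = W + delta_W; W itself stays frozen,
    # so only the d*r + r*k entries of B and A are trained.
    delta = lora_delta(B, A, alpha, r)
    return [[w + d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]
```

Because only B and A are updated, adaptation touches a tiny fraction of the parameters, which explains the inference-latency and training-cost edge LoRA shows over RAG in the summary above.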
Atharva Mangeshkumar Agrawal
University of Florida | VIT University
Machine Learning, Artificial Intelligence, Deep Learning, NLP, Computer Vision
Rutika Pandurang Shinde
Student, University of Florida
NLP, Machine Learning, Artificial Intelligence
Vasanth Kumar Bhukya
National Institute of Technology Calicut
Ashmita Chakraborty
SRM, Chennai
Sagar Bharat Shah
University of Cincinnati
Tanmay Shukla
Dartmouth College
Sree Pradeep Kumar Relangi
Arizona State University
Nilesh Mutyam
Arizona State University