Balancing Knowledge Delivery and Emotional Comfort in Healthcare Conversational Systems

📅 2025-06-16
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenge of jointly modeling knowledge transfer and empathetic consolation in medical dialogue systems. Existing large language models struggle to balance medical accuracy with empathetic expression. To tackle this, we propose an emotion-aware response generation paradigm: (1) leveraging real clinician–patient dialogues to develop a negative-emotion injection-based data rewriting and augmentation method; (2) integrating emotion recognition prompts with medical fact constraints, and introducing a multi-objective response alignment mechanism atop supervised fine-tuning (SFT). This work is the first to systematically co-optimize knowledge fidelity and empathetic capability. Experimental results demonstrate a 42.3% improvement in empathy scores while maintaining 98.7% medical answer accuracy, achieving balanced enhancement across multiple clinical scenarios.

Technology Category

Application Category

📝 Abstract
With the advancement of large language models, many dialogue systems are now capable of providing reasonable and informative responses to patients' medical conditions. However, when patients consult their doctor, they may experience negative emotions due to the severity and urgency of their situation. If the model can provide appropriate comfort and empathy based on the patient's negative emotions while answering medical questions, it will likely offer a more reassuring experience during the medical consultation process. To address this issue, our paper explores the balance between knowledge sharing and emotional support in the healthcare dialogue process. We utilize a large language model to rewrite a real-world interactive medical dialogue dataset, generating patient queries with negative emotions and corresponding medical responses aimed at soothing the patient's emotions while addressing their concerns. The modified data serves to refine the latest large language models with various fine-tuning methods, enabling them to accurately provide sentences with both emotional reassurance and constructive suggestions in response to patients' questions. Compared to the original LLM model, our experimental results demonstrate that our methodology significantly enhances the model's ability to generate emotional responses while maintaining its original capability to provide accurate knowledge-based answers.
Problem

Research questions and friction points this paper is trying to address.

Balancing medical knowledge and emotional support in healthcare dialogues
Enhancing LLMs to provide empathetic responses to patients' negative emotions
Improving patient comfort while maintaining accurate medical advice delivery
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM rewrites medical dialogues for emotional context
Fine-tunes LLM for emotional and factual responses
Balances medical knowledge with patient emotional comfort
🔎 Similar Papers
No similar papers found.
Shang-Chi Tsai
Shang-Chi Tsai
NTU
Artificial Intelligence
Y
Yun-Nung Chen
National Taiwan University, Taipei, Taiwan