🤖 AI Summary
This study addresses core ethical challenges associated with deploying generative AI (GenAI) for automated clinical patient messaging. Using qualitative methods, we conducted semi-structured interviews with 21 physicians participating in a GenAI pilot program and identified four critical ethical dimensions: the necessity of human oversight, transparency and informed consent regarding AI use, patient misunderstanding of AI's role, and robust privacy protection and data security. Our findings indicate that physicians locate ethical accountability with AI users rather than with the technology itself, challenging anthropomorphic attributions of responsibility. Based on this insight, we propose a practical human–AI collaborative framework grounded in shared responsibility and clinical accountability. The study contributes actionable, ethics-informed guidance for the safe, trustworthy, and responsible integration of GenAI into clinical communication workflows.
📝 Abstract
The increasing burden of responding to large volumes of patient messages has become a key factor contributing to physician burnout. Generative AI (GenAI) shows great promise for alleviating this burden by automatically drafting replies to patient messages. However, the ethical implications of this use have not been fully explored. To address this knowledge gap, we conducted a semi-structured interview study with 21 physicians who participated in a GenAI pilot program. We found that notable ethical considerations expressed by the physician participants included human oversight as an ethical safeguard, transparency of and patient consent to AI use, patient misunderstanding of AI's role, and patient privacy and data security as prerequisites. Additionally, our findings suggest that the physicians believe the ethical responsibility for using GenAI in this context lies primarily with users, not with the technology. These findings may provide useful insights for guiding the future implementation of GenAI in clinical practice.