When AI Writes Back: Ethical Considerations by Physicians on AI-Drafted Patient Message Replies

📅 2025-08-17
🤖 AI Summary
This study examines core ethical challenges of deploying generative AI (GenAI) to draft replies to patient messages. Through semi-structured interviews with 21 physicians participating in a GenAI pilot program, we identified four critical ethical dimensions: the necessity of human oversight, transparency and informed consent regarding AI use, patient understanding of AI's role, and privacy protection and data security. Our findings suggest that physicians locate ethical accountability with AI users rather than with the technology itself, challenging anthropomorphic attributions of responsibility. Based on this insight, we propose a practical human–AI collaborative framework grounded in shared responsibility and clinical accountability, offering actionable, ethics-informed guidance for the safe, trustworthy, and responsible integration of GenAI into clinical communication workflows.

📝 Abstract
The increasing burden of responding to large volumes of patient messages has become a key contributor to physician burnout. Generative AI (GenAI) shows great promise for alleviating this burden by automatically drafting patient message replies. However, the ethical implications of this use have not been fully explored. To address this knowledge gap, we conducted a semi-structured interview study with 21 physicians who participated in a GenAI pilot program. Notable ethical considerations raised by the physician participants included human oversight as an ethical safeguard, transparency and patient consent regarding AI use, patient misunderstanding of AI's role, and patient privacy and data security as prerequisites. Additionally, our findings suggest that the physicians believe the ethical responsibility for using GenAI in this context lies primarily with users, not with the technology. These findings may provide useful insights for guiding the future implementation of GenAI in clinical practice.
Problem

Research questions and friction points this paper is trying to address.

Investigating ethical implications of AI-drafted patient message replies
Exploring physician perspectives on AI use in clinical communication
Addressing oversight, transparency, and privacy concerns in AI implementation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semi-structured interviews with physicians
Human oversight as ethical safeguard
Transparency and patient consent requirements
Di Hu
University of California, Irvine, Irvine, CA, USA
Yawen Guo
PhD Candidate, University of California, Irvine
AI for Health, Clinical Informatics, Machine Learning, Natural Language Processing
Ha Na Cho
University of California, Irvine
AI, Health, HCI
Emilie Chow
University of California, Irvine, Irvine, CA, USA
Dana B. Mukamel
University of California, Irvine, Irvine, CA, USA
Dara Sorkin
University of California, Irvine, Irvine, CA, USA
Andrew Reikes
University of California, Irvine, Irvine, CA, USA
Danielle Perret
University of California, Irvine, Irvine, CA, USA
Deepti Pandita
University of California, Irvine, Irvine, CA, USA
Kai Zheng
University of California, Irvine, Irvine, CA, USA