Ethical Risks of Large Language Models in Medical Consultation: An Assessment Based on Reproductive Ethics

📅 2026-01-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study evaluates whether large language models adhere to Chinese regulations and ethical guidelines in the context of Chinese-language reproductive ethics medical consultations. To this end, the authors construct a test set comprising 986 regulation-derived questions and propose the first six-dimensional evaluation framework tailored to this domain—encompassing regulatory compliance, safety of guidance, issue identification, citation of authoritative sources, reasonableness of recommendations, and empathetic communication. A hybrid assessment combining human scoring and logical consistency analysis is employed. Results reveal that 29.91% of model responses entail safety risks, with all models performing poorly in citing relevant regulations and expressing empathy. Moreover, responses frequently exhibit logical inconsistencies and violations of fundamental moral intuitions.

📝 Abstract
Background: As large language models (LLMs) are increasingly used in healthcare and medical consultation settings, a growing concern is whether these models can respond to medical inquiries in an ethically compliant manner, particularly in accordance with local ethical standards. To address the pressing need for comprehensive research on reliability and safety, this study systematically evaluates LLM performance in answering questions related to reproductive ethics, specifically assessing their alignment with Chinese ethical regulations. Methods: We evaluated eight prominent LLMs (e.g., GPT-4, Claude-3.7) on a custom test set of 986 questions (906 subjective, 80 objective) derived from 168 articles of Chinese reproductive ethics regulations. Subjective responses were evaluated using a novel six-dimensional scoring rubric assessing Safety (Normative Compliance, Guidance Safety) and Answer Quality (Problem Identification, Citation, Suggestion, Empathy). Results: Significant safety issues were prevalent, with the rate of unsafe or misleading advice reaching 29.91%. A systemic weakness was observed across all models: universally poor performance in citing normative sources and expressing empathy. We also identified instances of anomalous moral reasoning, including logical self-contradictions and responses violating fundamental moral intuitions. Conclusions: Current LLMs are unreliable and unsafe for autonomous reproductive ethics counseling. Despite adequate knowledge recall, they exhibit critical deficiencies in safety, logical consistency, and essential humanistic skills. These findings serve as a cautionary note against premature deployment, urging future development to prioritize robust reasoning, regulatory justification, and empathy.
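The six-dimensional rubric described in the abstract groups dimensions into Safety and Answer Quality subtotals. The sketch below illustrates how per-dimension scores might be aggregated; the dimension names come from the abstract, but the 0–5 scale, equal weighting, and simple averaging are hypothetical assumptions made for illustration, not the authors' actual scoring scheme.

```python
# Hypothetical aggregation of the paper's six rubric dimensions.
# Dimension names are from the abstract; the 0-5 scale and equal
# weighting are assumptions for illustration only.
SAFETY_DIMS = ("normative_compliance", "guidance_safety")
QUALITY_DIMS = ("problem_identification", "citation", "suggestion", "empathy")


def aggregate(scores: dict[str, float]) -> dict[str, float]:
    """Average per-dimension scores into Safety and Quality subtotals."""
    safety = sum(scores[d] for d in SAFETY_DIMS) / len(SAFETY_DIMS)
    quality = sum(scores[d] for d in QUALITY_DIMS) / len(QUALITY_DIMS)
    return {"safety": safety, "quality": quality}


# Example response scored low on citation and empathy, mirroring the
# systemic weaknesses the paper reports across all evaluated models.
example = {
    "normative_compliance": 4, "guidance_safety": 2,
    "problem_identification": 3, "citation": 1,
    "suggestion": 3, "empathy": 1,
}
print(aggregate(example))  # {'safety': 3.0, 'quality': 2.0}
```

A weighted scheme (e.g., emphasizing Guidance Safety) would be a natural variant, but any such weights would be guesses beyond what the abstract states.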
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Medical Consultation
Reproductive Ethics
Ethical Compliance
AI Safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

large language models
reproductive ethics
ethical evaluation framework
normative compliance
empathy in AI
Hanhui Xu
Lecturer of Medical Ethics, Nankai University
Medical ethics · Bioethics
Jiacheng Ji
Institute of Technology Ethics for Human Future, Fudan University
Haoan Jin
X-LANCE Lab, Dept. of Computer Science and Engineering, Shanghai Jiao Tong University
Han Ying
Ant Group
Mengyue Wu
Shanghai Jiao Tong University
Speech perception and production · Affective computing · Audio cognition