🤖 AI Summary
This study evaluates whether large language models (LLMs) comply with Chinese regulations and ethical guidelines when answering Chinese-language medical consultation questions on reproductive ethics. The authors construct a test set of 986 questions derived from Chinese reproductive ethics regulations and propose the first six-dimensional evaluation framework tailored to this domain, covering Normative Compliance, Guidance Safety, Problem Identification, Citation, Suggestion, and Empathy. Assessment combines human scoring with logical consistency analysis. Results show that risk rates for unsafe or misleading advice reach 29.91%, and that all models perform poorly at citing relevant regulations and expressing empathy. Responses also frequently exhibit logical self-contradictions and violate fundamental moral intuitions.
📝 Abstract
Background: As large language models (LLMs) are increasingly used in healthcare and medical consultation settings, a growing concern is whether these models can respond to medical inquiries in an ethically compliant manner, particularly in accordance with local ethical standards. To address the pressing need for research on reliability and safety, this study systematically evaluates LLM performance on questions of reproductive ethics, specifically assessing their alignment with Chinese ethical regulations.

Methods: We evaluated eight prominent LLMs (e.g., GPT-4, Claude-3.7) on a custom test set of 986 questions (906 subjective, 80 objective) derived from 168 articles within Chinese reproductive ethics regulations. Subjective responses were evaluated using a novel six-dimensional scoring rubric assessing Safety (Normative Compliance, Guidance Safety) and Quality of the Answer (Problem Identification, Citation, Suggestion, Empathy).

Results: Significant safety issues were prevalent, with risk rates for unsafe or misleading advice reaching 29.91%. A systemic weakness was observed across all models: universally poor performance in citing normative sources and expressing empathy. We also identified instances of anomalous moral reasoning, including logical self-contradictions and responses that violate fundamental moral intuitions.

Conclusions: Current LLMs are unreliable and unsafe for autonomous reproductive ethics counseling. Although they can recall relevant knowledge, they exhibit critical deficiencies in safety, logical consistency, and essential humanistic skills. These findings serve as a cautionary note against premature deployment, urging future development to prioritize robust reasoning, regulatory justification, and empathy.
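To make the evaluation pipeline concrete, below is a minimal sketch of how the six-dimensional rubric and the reported risk rate could be represented in code. The abstract does not specify the scoring scale, dimension weights, or aggregation rule, so a 0-5 scale with equal weighting is assumed here, and all names (`RubricScores`, `risk_rate`) are hypothetical illustrations rather than the authors' actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch: the paper's exact scale, weights, and aggregation
# are not given in the abstract; a 0-5 scale with equal weights is assumed.

@dataclass
class RubricScores:
    # Safety dimensions
    normative_compliance: float
    guidance_safety: float
    # Quality-of-the-answer dimensions
    problem_identification: float
    citation: float
    suggestion: float
    empathy: float

    def safety(self) -> float:
        """Mean of the two safety dimensions."""
        return (self.normative_compliance + self.guidance_safety) / 2

    def quality(self) -> float:
        """Mean of the four answer-quality dimensions."""
        return (self.problem_identification + self.citation
                + self.suggestion + self.empathy) / 4


def risk_rate(flags: list[bool]) -> float:
    """Share of responses flagged as unsafe or misleading."""
    return sum(flags) / len(flags) if flags else 0.0


if __name__ == "__main__":
    # A response pattern matching the paper's finding: weak citation and empathy.
    scores = RubricScores(4, 3, 4, 1, 3, 1)
    print(f"safety={scores.safety():.2f}, quality={scores.quality():.2f}")
    # Illustrative only: ~295 flagged out of 986 graded responses gives
    # approximately the reported 29.91% risk rate.
    print(f"risk rate={risk_rate([True] * 295 + [False] * 691):.2%}")
```

Separating the Safety and Quality composites, as in this sketch, mirrors the paper's framing: a model can score well on knowledge-oriented dimensions while the safety composite and the per-response risk flags still reveal it as unfit for autonomous counseling.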