Ethical Risks in Deploying Large Language Models: An Evaluation of Medical Ethics Jailbreaking

πŸ“… 2026-01-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study addresses the vulnerability of current large language models to malicious prompts in Chinese medical ethics scenarios, where targeted safety evaluations are lacking. The authors propose the first jailbreaking attack evaluation framework specifically designed for high-risk Chinese medical ethical contexts, introducing a vector-based attack methodology that integrates role-playing, scenario simulation, and multi-turn dialogue. Leveraging the DeepInception framework, they conduct adversarial testing on seven mainstream models. A novel defense paradigm is introduced, incorporating process supervision, multi-factor identity verification, and cross-model collaborative protection. Furthermore, a hierarchical scoring matrix is developed to quantify both attack success rate (ASR) and its relative gain (ASR Gain). Experimental results reveal severe defense failures across most models, with an average ASR of 82.1% and five models exhibiting ASRs between 96% and 100%, while only Claude-Sonnet-4-Reasoning demonstrates relatively robust performance.

Technology Category

Application Category

πŸ“ Abstract
Background: While Large Language Models (LLMs) have achieved widespread adoption, malicious prompt engineering specifically"jailbreak attacks"poses severe security risks by inducing models to bypass internal safety mechanisms. Current benchmarks predominantly focus on public safety and Western cultural norms, leaving a critical gap in evaluating the niche, high-risk domain of medical ethics within the Chinese context. Objective: To establish a specialized jailbreak evaluation framework for Chinese medical ethics and to systematically assess the defensive resilience and ethical alignment of seven prominent LLMs when subjected to sophisticated adversarial simulations. Methodology: We evaluated seven prominent models (e.g., GPT-5, Claude-Sonnet-4-Reasoning, DeepSeek-R1) using a"role-playing + scenario simulation + multi-turn dialogue"vector within the DeepInception framework. The testing focused on eight high-risk themes, including commercial surrogacy and organ trading, utilizing a hierarchical scoring matrix to quantify the Attack Success Rate (ASR) and ASR Gain. Results: A systemic collapse of defenses was observed, whereas models demonstrated high baseline compliance, the jailbreak ASR reached 82.1%, representing an ASR Gain of over 80 percentage points. Claude-Sonnet-4-Reasoning emerged as the most robust model, while five models including Gemini-2.5-Pro and GPT-4.1 exhibited near-total failure with ASRs between 96% and 100%. Conclusions: Current LLMs are highly vulnerable to contextual manipulation in medical ethics, often prioritizing"helpfulness"over safety constraints. To enhance security, we recommend a transition from outcome to process supervision, the implementation of multi-factor identity verification, and the establishment of cross-model"joint defense"mechanisms.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Medical Ethics
Jailbreak Attacks
Ethical Risks
Chinese Context
Innovation

Methods, ideas, or system contributions that make the work stand out.

medical ethics jailbreaking
adversarial evaluation framework
contextual manipulation
process supervision
multi-turn dialogue attack
πŸ”Ž Similar Papers
No similar papers found.
C
Chutian Huang
Department of Central Laboratory, Shanghai Children’s Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China; School of Philosophy, Fudan University
D
Dake Cao
School of Philosophy, Fudan University
J
Jiacheng Ji
Institute of Technology Ethics for Human Future, Fudan University
Y
Yunlou Fan
School of Philosophy, Fudan University
C
Chengze Yan
School of Philosophy, Fudan University
Hanhui Xu
Hanhui Xu
Lecturer of Medical ethics, Nankai University
medical ethicsBioethics