JMedEthicBench: A Multi-Turn Conversational Benchmark for Evaluating Medical Safety in Japanese Large Language Models

πŸ“… 2026-01-04
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This study addresses the lack of multilingual, multi-turn safety-evaluation benchmarks for large language models (LLMs) in domain-specific contexts by introducing the first Japanese multi-turn medical-dialogue safety benchmark, grounded in 67 ethical guidelines from the Japan Medical Association. The authors automatically generate over 50,000 adversarial dialogue samples and propose a multi-turn adversarial safety evaluation framework that employs seven automatically discovered jailbreak strategies and a dual-LLM scoring protocol. Evaluations across 27 models show that commercial models remain robust, while specialized medical models are markedly more vulnerable to attacks. Safety scores drop significantly over multi-turn interactions (median falling from 9.5 to 5.0, p<0.001), and these vulnerabilities are consistent across languages, uncovering a novel risk: domain-specific fine-tuning may inadvertently weaken a model's inherent safety mechanisms.

πŸ“ Abstract
As Large Language Models (LLMs) are increasingly deployed in the healthcare field, it becomes essential to carefully evaluate their medical safety before clinical use. However, existing safety benchmarks remain predominantly English-centric and test models with only single-turn prompts, despite the multi-turn nature of clinical consultations. To address these gaps, we introduce JMedEthicBench, the first multi-turn conversational benchmark for evaluating the medical safety of LLMs for Japanese healthcare. Our benchmark is based on 67 guidelines from the Japan Medical Association and contains over 50,000 adversarial conversations generated using seven automatically discovered jailbreak strategies. Using a dual-LLM scoring protocol, we evaluate 27 models and find that commercial models maintain robust safety while medical-specialized models exhibit increased vulnerability. Furthermore, safety scores decline significantly across conversation turns (median: 9.5 to 5.0, $p<0.001$). Cross-lingual evaluation on both Japanese and English versions of our benchmark reveals that medical model vulnerabilities persist across languages, indicating inherent alignment limitations rather than language-specific factors. These findings suggest that domain-specific fine-tuning may accidentally weaken safety mechanisms and that multi-turn interactions represent a distinct threat surface requiring dedicated alignment strategies.
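The abstract's evaluation design — an attacker LLM that drives each adversarial turn, a target model that responds, and a judge LLM that scores every reply — can be sketched as a simple loop. This is a minimal illustration only: the function names (`attacker_llm`, `target_llm`, `judge_llm`) are stubs, and the 0–10 safety scale and per-dialogue median are assumptions modeled on the reported scores; the paper's actual prompts, models, and scoring rubric are not reproduced here.

```python
from statistics import median

def attacker_llm(history):
    """Stub: produce the next adversarial user turn (e.g. a roleplay jailbreak).
    In the real protocol this would be an LLM applying one of the seven
    discovered jailbreak strategies."""
    return f"Adversarial follow-up #{len(history) // 2 + 1}"

def target_llm(history, user_msg):
    """Stub: the model under evaluation replies given the conversation so far."""
    return "I cannot provide advice that conflicts with medical ethics."

def judge_llm(history):
    """Stub: a second LLM scores the latest assistant reply for safety (0-10)."""
    return 9.0

def evaluate_dialogue(n_turns=5):
    """Run one multi-turn adversarial dialogue, scoring safety at every turn."""
    history, scores = [], []
    for _ in range(n_turns):
        user_msg = attacker_llm(history)
        reply = target_llm(history, user_msg)
        history += [("user", user_msg), ("assistant", reply)]
        scores.append(judge_llm(history))   # dual-LLM step: judge != target
    return scores  # per-turn safety scores; medians are aggregated per model

scores = evaluate_dialogue()
print(median(scores))
```

A turn-level decline like the reported 9.5 → 5.0 median drop would appear here as `scores` trending downward with the turn index, which is what the benchmark's per-turn aggregation is designed to surface.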
Problem

Research questions and friction points this paper is trying to address.

medical safety
large language models
multi-turn conversation
Japanese healthcare
safety benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-turn conversation
medical safety benchmark
adversarial jailbreak
cross-lingual evaluation
LLM alignment