MedErrBench: A Fine-Grained Multilingual Benchmark for Medical Error Detection and Correction with Clinical Expert Annotations

📅 2026-02-05
🤖 AI Summary
This work addresses the critical gap in evaluating medical large language models (LLMs) for clinically significant errors—such as misdiagnoses or inappropriate treatment recommendations—particularly in multilingual settings. We introduce the first expert-guided benchmark for detecting, localizing, and correcting medical errors across English, Arabic, and Chinese, grounded in an expanded ten-category error taxonomy. Clinical relevance is ensured through a collaborative annotation process involving medical professionals and real-world case collection. Systematic evaluation of general-purpose, language-specific, and medical-domain LLMs reveals substantial performance degradation in non-English contexts, underscoring the vital role of both linguistic competence and clinical awareness in ensuring the safety of medical AI systems. The dataset and evaluation protocol are publicly released to support equitable and safe global development of healthcare AI.

📝 Abstract
Inaccuracies in existing or generated clinical text may lead to serious adverse consequences, especially in cases of misdiagnosis or incorrect treatment suggestions. With Large Language Models (LLMs) increasingly being used across diverse healthcare applications, comprehensive evaluation through dedicated benchmarks is crucial. However, such datasets remain scarce, especially across diverse languages and contexts. In this paper, we introduce MedErrBench, the first multilingual benchmark for error detection, localization, and correction, developed under the guidance of experienced clinicians. Based on an expanded taxonomy of ten common error types, MedErrBench covers English, Arabic, and Chinese, with natural clinical cases annotated and reviewed by domain experts. We assessed the performance of a range of general-purpose, language-specific, and medical-domain language models across all three tasks. Our results reveal notable performance gaps, particularly in non-English settings, highlighting the need for clinically grounded, language-aware systems. By making MedErrBench and our evaluation protocols publicly available, we aim to advance multilingual clinical NLP and promote safer, more equitable AI-based healthcare globally. The dataset is available in the supplementary material. An anonymized version of the dataset is available at: https://github.com/congboma/MedErrBench.
Problem

Research questions and friction points this paper is trying to address.

medical error detection
multilingual benchmark
clinical text accuracy
LLM evaluation
healthcare safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

multilingual medical NLP
medical error detection
clinical expert annotation
LLM evaluation benchmark
error correction
Congbo Ma
New York University Abu Dhabi
Natural Language Processing · Machine Learning
Yichun Zhang
New York University
Yousef Al-Jazzazi
New York University Abu Dhabi
Ahamed Foisal
New York University Abu Dhabi
Laasya Sharma
University of Birmingham
Yousra Sadqi
Cleveland Clinic Abu Dhabi
Khaled Saleh
Cleveland Clinic Abu Dhabi
Jihad Mallat
Cleveland Clinic Abu Dhabi
Farah E. Shamout
New York University Abu Dhabi