Cross-Lingual Empirical Evaluation of Large Language Models for Arabic Medical Tasks

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the significant performance degradation of large language models (LLMs) on Arabic medical tasks, attributing it to linguistic bias and subword tokenization fragmentation, both especially pronounced in complex tasks, and reveals a weak correlation between models' self-reported confidence and their actual correctness. Through cross-lingual comparative experiments, tokenization structure analysis, confidence calibration, and consistency evaluation of model explanations, the work systematically assesses mainstream LLMs on Arabic versus English medical question answering. It identifies, for the first time, subword fragmentation in Arabic tokenization as a critical bottleneck and highlights the absence of language-aware design in current models. These findings offer concrete directions for improving the reliability and equity of multilingual medical AI systems.
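The weak link between self-reported confidence and correctness noted above is commonly quantified with expected calibration error (ECE): predictions are binned by confidence, and ECE is the weighted mean gap between each bin's accuracy and its average confidence. A minimal sketch follows; the confidence values and correctness labels are made-up illustrations, not data from the paper.

```python
# Minimal expected calibration error (ECE) sketch: bin predictions by
# confidence, then take the bin-size-weighted mean |accuracy - confidence|.

def expected_calibration_error(confidences, correct, n_bins=5):
    """ECE over equal-width confidence bins (0, 1]."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(accuracy - avg_conf)
    return ece

# Hypothetical model outputs: high self-reported confidence, mixed correctness
conf = [0.95, 0.90, 0.92, 0.85, 0.60, 0.88, 0.93, 0.55]
corr = [1,    0,    0,    1,    1,    0,    1,    0]
print(f"ECE: {expected_calibration_error(conf, corr):.4f}")
```

A well-calibrated model would yield an ECE near zero; here the high-confidence bin is only 50% accurate, so the gap is large, mirroring the miscalibration pattern the summary describes.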

📝 Abstract
In recent years, Large Language Models (LLMs) have become widely used in medical applications such as clinical decision support, medical education, and medical question answering. Yet these models are often English-centric, limiting their robustness and reliability for linguistically diverse communities. Recent work has highlighted performance discrepancies in low-resource languages across various medical tasks, but the underlying causes remain poorly understood. In this study, we conduct a cross-lingual empirical analysis of LLM performance on Arabic and English medical question answering. Our findings reveal a persistent language-driven performance gap that widens with increasing task complexity. Tokenization analysis exposes structural fragmentation in Arabic medical text, while reliability analysis suggests that model-reported confidence and explanations exhibit limited correlation with correctness. Together, these findings underscore the need for language-aware design and evaluation strategies in LLMs for medical tasks.
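The tokenization fragmentation the abstract describes is often measured as fertility: subword tokens produced per whitespace-separated word. The toy sketch below assumes a hypothetical English-biased vocabulary with greedy longest-match and byte fallback (a simplification of real BPE tokenizers, not the paper's method); it illustrates how Arabic text can fragment into far more tokens per word than English.

```python
# Toy fertility (tokens per word) comparison. VOCAB and the tokenizer are
# illustrative stand-ins for a real LLM subword tokenizer: English subwords
# are in-vocabulary, while out-of-vocabulary text (here, Arabic) falls back
# to one token per UTF-8 byte, as byte-fallback BPE tokenizers do.

VOCAB = {"the", "pat", "ient", "has", "a", "fever", "and", "cough", " "}

def tokenize(text: str) -> list[str]:
    """Greedy longest-match over VOCAB; unknown characters become byte tokens."""
    tokens, i = [], 0
    while i < len(text):
        match = None
        for j in range(len(text), i, -1):  # try longest substring first
            if text[i:j] in VOCAB:
                match = text[i:j]
                break
        if match:
            tokens.append(match)
            i += len(match)
        else:
            # Byte fallback: Arabic letters are 2 bytes each in UTF-8
            tokens.extend(f"<0x{b:02X}>" for b in text[i].encode("utf-8"))
            i += 1
    return tokens

def fertility(text: str) -> float:
    """Subword tokens per whitespace word, ignoring pure-space tokens."""
    toks = [t for t in tokenize(text) if t != " "]
    return len(toks) / len(text.split())

en = "the patient has a fever and cough"
ar = "المريض يعاني من حمى وسعال"  # "The patient suffers from fever and cough"
print(f"English fertility: {fertility(en):.2f}")
print(f"Arabic fertility:  {fertility(ar):.2f}")
```

Under this toy vocabulary the English sentence stays near one token per word, while the Arabic sentence fragments into many byte-level tokens per word, the same structural effect the paper's tokenization analysis attributes to English-centric vocabularies.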
Problem

Research questions and friction points this paper is trying to address.

Cross-lingual
Large Language Models
Arabic
Medical Question Answering
Low-resource languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

cross-lingual evaluation
Arabic medical NLP
tokenization fragmentation
LLM reliability
language-aware design