🤖 AI Summary
This study addresses the significant performance degradation of large language models (LLMs) on Arabic medical tasks, attributed to linguistic bias and subword tokenization fragmentation—particularly pronounced in complex tasks—and reveals a weak correlation between models’ self-reported confidence and actual correctness. Through cross-lingual comparative experiments, tokenization structure analysis, confidence calibration, and consistency evaluation of model explanations, the work systematically assesses mainstream LLMs on Arabic versus English medical question answering. It identifies, for the first time, subword fragmentation in Arabic tokenization as a critical bottleneck and highlights the absence of language-aware design in current models. These findings offer crucial directions for improving the reliability and equity of multilingual medical AI systems.
📝 Abstract
In recent years, Large Language Models (LLMs) have become widely used in medical applications, such as clinical decision support, medical education, and medical question answering. Yet, these models are often English-centric, limiting their robustness and reliability for linguistically diverse communities. Recent work has highlighted performance discrepancies in low-resource languages across various medical tasks, but the underlying causes remain poorly understood. In this study, we conduct a cross-lingual empirical analysis of LLM performance on Arabic and English medical question answering. Our findings reveal a persistent language-driven performance gap that intensifies with increasing task complexity. Tokenization analysis exposes structural fragmentation in Arabic medical text, while reliability analysis suggests that model-reported confidence and explanations exhibit limited correlation with correctness. Together, these findings underscore the need for language-aware design and evaluation strategies in LLMs for medical tasks.