🤖 AI Summary
This study addresses the limited performance of existing large language models in low-resource-language medical settings, a gap that hinders the equitable global deployment of AI-driven healthcare. To bridge it, the authors introduce GlobMed, a large multilingual medical dataset spanning 12 languages—including four low-resource ones—together with an accompanying evaluation benchmark, GlobMed-Bench. Building on this foundation, they develop GlobMed-LLMs, a family of multilingual medical large language models with parameter counts ranging from 1.7B to 8B. Experimental results show that GlobMed-LLMs achieve an average performance improvement of over 40% relative to baseline models, with gains exceeding threefold on low-resource languages. These advances substantially narrow the performance disparity across languages, laying a foundation for fair and effective multilingual medical AI worldwide.
📝 Abstract
Despite continuous advances in medical technology, the global distribution of health care resources remains uneven. The development of large language models (LLMs) has transformed the landscape of medicine and holds promise for improving health care quality and expanding access to medical information globally. However, existing LLMs are trained primarily on high-resource languages, limiting their applicability in global medical scenarios. To address this gap, we constructed GlobMed, a large multilingual medical dataset containing over 500,000 entries spanning 12 languages, including four low-resource languages. Building on this, we established GlobMed-Bench, which systematically assesses 56 state-of-the-art proprietary and open-weight LLMs across multiple multilingual medical tasks, revealing significant performance disparities across languages, particularly for low-resource ones. We further introduce GlobMed-LLMs, a suite of multilingual medical LLMs trained on GlobMed, with parameter counts ranging from 1.7B to 8B. GlobMed-LLMs achieved an average performance improvement of over 40% relative to baseline models, including a more than threefold gain on low-resource languages. Together, these resources provide an important foundation for the equitable development and application of LLMs globally, enabling broader language communities to benefit from technological advances.