🤖 AI Summary
Translation-based data augmentation fails to guarantee clinical performance in the target language for multilingual medical AI, particularly for low-resource languages like Arabic. Method: we propose a task-aware language-mixing sampling strategy for Arabic large language models that identifies task-specific optimal language proportions, and we conduct multilingual pretraining with ablation-driven joint analysis of data composition and scale to assess scalability and efficacy. Contribution/Results: empirical evaluation on clinical benchmarks, including diagnostic reasoning and clinical note generation, demonstrates that optimized language mixing improves Arabic clinical accuracy by 12.4% and significantly enhances model robustness. Our work establishes a reproducible, methodology-driven framework for optimizing data composition in low-resource medical language modeling, offering both principled guidelines and practical strategies for multilingual clinical LLM development.
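The core idea of task-aware language-mixing sampling can be illustrated with a minimal sketch. The task names, mixing ratios, and helper functions below are hypothetical placeholders for illustration only, not the paper's actual configuration or reported optima:

```python
import random

# Hypothetical per-task language mixing ratios (illustrative values only).
TASK_LANGUAGE_MIX = {
    "diagnostic_reasoning": {"ar": 0.6, "en": 0.4},
    "clinical_note_generation": {"ar": 0.3, "en": 0.7},
}

def sample_language(task: str, rng: random.Random) -> str:
    """Draw a language for one training example according to the task's mix."""
    mix = TASK_LANGUAGE_MIX[task]
    langs, weights = zip(*mix.items())
    return rng.choices(langs, weights=weights, k=1)[0]

def build_batch(task: str, pools: dict, batch_size: int, seed: int = 0) -> list:
    """Assemble a training batch: first sample a language per slot from the
    task-specific mix, then draw an example from that language's data pool."""
    rng = random.Random(seed)
    batch = []
    for _ in range(batch_size):
        lang = sample_language(task, rng)
        batch.append(rng.choice(pools[lang]))
    return batch
```

In this framing, finding the "optimal language proportions" for a task amounts to sweeping the ratio dictionary (e.g. Arabic fractions from 0.0 to 1.0) and selecting the mix that maximizes held-out performance on that task's benchmark.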
📝 Abstract
This paper investigates the challenges of developing large language models (LLMs) proficient in both multilingual understanding and medical knowledge. We demonstrate that simply translating medical data does not guarantee strong performance on clinical tasks in the target language. Our experiments reveal that the optimal language mix in training data varies significantly across medical tasks. We find that larger models with carefully calibrated language ratios achieve superior performance on native-language clinical tasks. Furthermore, our results suggest that relying solely on fine-tuning may not be the most effective way to incorporate new language knowledge into LLMs. Instead, data- and compute-intensive pretraining may still be necessary to achieve optimal performance in multilingual medical settings. These findings provide valuable guidance for building effective and inclusive medical AI systems for diverse linguistic communities.