Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data

📅 2025-05-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the role of bilingual translation data in massively multilingual continual pre-training of large language models (LLMs), examining its impact on adapting the Llama 3 family of models to 500 languages, including many low-resource ones. The authors construct the MaLA bilingual translation corpus, covering more than 2,500 language pairs, and use it to continually pre-train the EMMA-500 Llama 3 suite of four models on diverse data mixes of up to 671B tokens, comparing training with and without bilingual translation data. Evaluation across 7 tasks and 12 benchmarks shows that bilingual data tends to improve cross-lingual transfer and downstream performance, particularly for low-resource languages. To support reproducibility and further multilingual LLM research, the authors open-source the MaLA corpus, all model weights, code, and model generations.

📝 Abstract
This paper investigates a critical design decision in the practice of massively multilingual continual pre-training -- the inclusion of parallel data. Specifically, we study the impact of bilingual translation data for massively multilingual language adaptation of the Llama 3 family of models to 500 languages. To this end, we construct the MaLA bilingual translation corpus, containing data from more than 2,500 language pairs. Subsequently, we develop the EMMA-500 Llama 3 suite of four massively multilingual models -- continually pre-trained from the Llama 3 family of base models extensively on diverse data mixes up to 671B tokens -- and explore the effect of continual pre-training with or without bilingual translation data. Comprehensive evaluation across 7 tasks and 12 benchmarks demonstrates that bilingual data tends to enhance language transfer and performance, particularly for low-resource languages. We open-source the MaLA corpus, EMMA-500 Llama 3 suite artefacts, code, and model generations.
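To make the setup concrete, below is a minimal Python sketch of how bilingual translation pairs can be serialized into plain-text sequences for causal-LM continual pre-training. The language tags and separator format are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch: turning bilingual translation pairs into plain-text
# training sequences for causal-LM continual pre-training.
# NOTE: the "<lang>" tag format below is an assumption for illustration,
# not the formatting used by the EMMA-500 training pipeline.

def format_bilingual_example(src_text: str, tgt_text: str,
                             src_lang: str, tgt_lang: str) -> str:
    """Concatenate a translation pair into one training sequence."""
    return f"<{src_lang}> {src_text} <{tgt_lang}> {tgt_text}"

pairs = [
    ("The cat sits on the mat.", "Le chat est assis sur le tapis.", "eng", "fra"),
]
for src, tgt, sl, tl in pairs:
    print(format_bilingual_example(src, tgt, sl, tl))
```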
Problem

Research questions and friction points this paper is trying to address.

Impact of bilingual translation data on multilingual model adaptation
Enhancing low-resource language performance via translation data
Continual pre-training and data-mixing strategies for adapting Llama 3 to 500 languages (see the sampling sketch after this list)
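The data-mixing question above is commonly handled with temperature-based language sampling, which upweights low-resource languages relative to their raw token counts. The sketch below illustrates that general technique under made-up token counts; the paper's actual mixing recipe may differ.

```python
# Illustrative sketch of temperature-based language sampling, a common
# way to balance high- and low-resource languages in a multilingual
# data mix. Token counts below are made up for the example.
import random

token_counts = {"eng": 1_000_000, "fra": 200_000, "swh": 5_000, "quy": 500}
alpha = 0.3  # exponent < 1 flattens the distribution, upsampling low-resource

weights = {lang: n ** alpha for lang, n in token_counts.items()}
total = sum(weights.values())
probs = {lang: w / total for lang, w in weights.items()}

# Draw the language for the next training batch.
lang = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
print(probs, "->", lang)
```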
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses bilingual translation data for massively multilingual adaptation
Constructs the MaLA corpus covering more than 2,500 language pairs
Develops the EMMA-500 suite of four multilingual Llama 3 models (see the loading sketch after this list)
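Since the EMMA-500 Llama 3 suite is open-sourced, a checkpoint should be loadable with standard Hugging Face transformers calls, as in the sketch below. The repository id is an assumption based on the MaLA-LM organization's naming; check the actual release for the exact ids.

```python
# Sketch of loading an EMMA-500 checkpoint with Hugging Face transformers.
# The repo id below is hypothetical; consult the official release for
# the real model identifiers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaLA-LM/emma-500-llama3-8b"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Translate to Swahili: Good morning.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```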