LOLA - An Open-Source Massively Multilingual Large Language Model

📅 2024-09-17
🏛️ arXiv.org
🤖 AI Summary
To address the challenge of balancing linguistic diversity and computational efficiency in multilingual large language models (LLMs), this paper introduces LOLA, an open-source sparse Mixture-of-Experts (MoE) LLM supporting more than 160 languages. Methodologically, the authors find that the expert-routing mechanism inherently captures linguistic phylogeny, mitigating the curse of multilinguality, and further enhance it via curriculum-based multilingual data scheduling, language-aware tokenization, and routing regularization. Empirically, LOLA achieves state-of-the-art or near-state-of-the-art performance on multilingual understanding and generation benchmarks, significantly improving cross-lingual generalization and training scalability. All model weights, training configurations, and evaluation protocols are fully open-sourced to ensure reproducibility and facilitate efficient downstream adaptation.

📝 Abstract
This paper presents LOLA, a massively multilingual large language model trained on more than 160 languages using a sparse Mixture-of-Experts Transformer architecture. Our architectural and implementation choices address the challenge of harnessing linguistic diversity while maintaining efficiency and avoiding the common pitfalls of multilinguality. Our analysis of the evaluation results shows competitive performance in natural language generation and understanding tasks. Additionally, we demonstrate how the learned expert-routing mechanism exploits implicit phylogenetic linguistic patterns to potentially alleviate the curse of multilinguality. We provide an in-depth look at the training process, an analysis of the datasets, and a balanced exploration of the model's strengths and limitations. As an open-source model, LOLA promotes reproducibility and serves as a robust foundation for future research. Our findings enable the development of compute-efficient multilingual models with strong, scalable performance across languages.
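The abstract above credits the learned expert-routing mechanism of the sparse MoE architecture with exploiting phylogenetic patterns across languages. As a minimal sketch of how such routing typically works, the snippet below implements generic top-2 gating (softmax over per-expert gate scores, renormalized over the two selected experts); it is a standard sparse-MoE pattern, not LOLA's exact router, and all names (`top2_route`, `gate_weights`) are illustrative.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of gate logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def top2_route(token_vec, gate_weights, experts):
    """Route a token to its top-2 experts and mix their outputs.

    gate_weights: one weight vector per expert (gives a score per expert).
    experts: list of callables, each mapping a vector to a vector.
    Returns the mixed output and the indices of the chosen experts.
    """
    # One gate logit per expert: dot product of token with the gate vector.
    logits = [sum(w * x for w, x in zip(wv, token_vec)) for wv in gate_weights]
    probs = softmax(logits)
    # Keep only the two highest-probability experts (sparse activation).
    top2 = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:2]
    norm = sum(probs[i] for i in top2)
    out = [0.0] * len(token_vec)
    for i in top2:
        expert_out = experts[i](token_vec)
        weight = probs[i] / norm  # renormalize over the selected experts
        out = [o + weight * e for o, e in zip(out, expert_out)]
    return out, top2

# Toy demo: three "experts" that just scale the input by 1x, 2x, 3x.
experts = [lambda v, k=k: [k * x for x in v] for k in (1.0, 2.0, 3.0)]
gate = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
mixed, chosen = top2_route([1.0, 0.0], gate, experts)
```

Because only the selected experts run per token, compute stays roughly constant as the total expert count (and hence capacity for many languages) grows; the paper's analysis of which experts fire for which languages is what reveals the phylogenetic clustering.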
Problem

Research questions and friction points this paper is trying to address:

- Multilingual Language Model
- Open-Source
- Natural Language Processing

Innovation

Methods, ideas, or system contributions that make the work stand out:

- Multilingual Processing
- Open-Source Architecture
- Language Pattern Recognition
Authors

- Nikit Srivastava (Data Science Group, Paderborn University, Germany)
- Denis Kuchelev
- Tatiana Moteu (Data Science Group, Paderborn University, Germany)
- Kshitij Shetty
- Michael Roeder (Data Science Group, Paderborn University, Germany)
- Diego Moussallem (Paderborn University); research interests: Natural Language Generation, Machine Translation, Knowledge Graphs, Natural Language Processing, Reproducible Research
- Hamada M. Zahera (University of Paderborn); research interests: Natural Language Processing, Large Language Models, Knowledge Graphs
- A. Ngomo (Data Science Group, Paderborn University, Germany)