Lugha-Llama: Adapting Large Language Models for African Languages

📅 2025-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the underrepresentation of low-resource African languages in large language models (LLMs), this paper adapts LLMs through continued pretraining on a carefully constructed multilingual data mix. Methodologically, it combines curated African-language corpora with high-quality English educational texts, and probes the role of the English data through a cross-lingual ablation in which a 200M-token subset is translated into Swahili. The analysis indicates that the content of the English educational texts, rather than the English language itself, is primarily responsible for the improvements in African-language performance. Evaluated on the IrokoBench suite (notably AfriMMLU) and the cross-lingual question answering benchmark AfriQA, the resulting models consistently outperform similarly sized baselines, exceeding the base model by over 10% on AfriQA. All models and data are publicly released, establishing a scalable pathway for adapting LLMs to low-resource African languages.

📝 Abstract
Large language models (LLMs) have achieved impressive results in a wide range of natural language applications. However, they often struggle to recognize low-resource languages, in particular African languages, which are not well represented in large training corpora. In this paper, we consider how to adapt LLMs to low-resource African languages. We find that combining curated data from African languages with high-quality English educational texts results in a training mix that substantially improves the model's performance on these languages. On the challenging IrokoBench dataset, our models consistently achieve the best performance amongst similarly sized baselines, particularly on knowledge-intensive multiple-choice questions (AfriMMLU). Additionally, on the cross-lingual question answering benchmark AfriQA, our models outperform the base model by over 10%. To better understand the role of English data during training, we translate a subset of 200M tokens into Swahili and perform an analysis which reveals that the content of these data is primarily responsible for the strong performance. We release our models and data to encourage future research on African languages.
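The core recipe the abstract describes is a weighted training mix of curated African-language documents and English educational texts. A minimal sketch of such a mixture sampler is shown below; the function name, the 70/30 ratio, and the document lists are illustrative assumptions, not the paper's actual configuration.

```python
import random

def build_training_mix(african_docs, english_edu_docs,
                       african_ratio=0.7, total=10, seed=0):
    """Sample a training mix: a fraction `african_ratio` of documents
    drawn from curated African-language corpora, the remainder from
    English educational texts (the ratio here is illustrative)."""
    rng = random.Random(seed)
    n_african = round(total * african_ratio)
    mix = [rng.choice(african_docs) for _ in range(n_african)]
    mix += [rng.choice(english_edu_docs) for _ in range(total - n_african)]
    rng.shuffle(mix)  # interleave the two sources for training
    return mix
```

In practice the mixing would operate over token counts rather than document counts, but the same proportion-based sampling idea applies.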
Problem

Research questions and friction points this paper is trying to address.

Adapting LLMs to low-resource African languages that are underrepresented in training corpora
Improving model performance with curated African-language data
Understanding the role of English data in cross-lingual benchmark performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training mix combining curated African-language corpora with high-quality English educational texts
Best performance among similarly sized models on IrokoBench and AfriQA
Swahili-translation ablation isolating the contribution of English data content