📝 Abstract
Natural Language Processing (NLP) has become a dominant subfield of artificial intelligence as the need for machines to understand human language grows indispensable. NLP applications are now ubiquitous, driven in part by the vast volumes of text generated daily through media such as social networking sites. However, this growth has not extended to most African languages, largely due to persistent resource limitations, among other issues. Yorùbá, a tonal and morphologically rich African language, suffers a similar fate, resulting in limited NLP adoption. To encourage further research towards improving this situation, this systematic literature review comprehensively analyses studies addressing NLP development for Yorùbá, identifying challenges, resources, techniques, and applications. A well-defined search string from a structured protocol was employed to search, select, and analyse 105 primary studies published between 2014 and 2024 in reputable databases. The review highlights the scarcity of annotated corpora, the limited availability of pre-trained language models, and linguistic challenges such as tonal complexity and diacritic dependency as significant obstacles. It also identifies the most prominent techniques, notably rule-based methods. The findings reveal a growing body of multilingual and monolingual resources, even though the field remains constrained by socio-cultural factors such as code-switching and the abandonment of the language in digital contexts. By synthesising existing research, this review provides a foundation for advancing NLP for Yorùbá and for African languages generally, and aims to guide future research by identifying gaps and opportunities, thereby contributing to the broader inclusion of Yorùbá and other under-resourced African languages in global NLP advancements.