🤖 AI Summary
Large language models (LLMs) lack systematic evaluation on low-resource languages (LRLs). Method: We introduce a high-quality, multi-task understanding benchmark for Latvian and Giriama (a Kenyan LRL), adapting the MMLU framework via multilingual extension: test items are localized with native speakers and culturally grounded subsets are curated. A zero-shot cross-lingual evaluation protocol is applied across eight state-of-the-art closed- and open-source LLMs. Contributions/Results: This work presents the first systematic LLM evaluation for Giriama. It reveals a stark performance gap: open-weight models such as Mistral-large (35.6%) and Llama-70B Instruct (41%) perform poorly, while OpenAI's o1 achieves 88.8% on Latvian and 70.8% on Giriama. Crucially, the comparison confirms that cultural localization is indispensable for accurate LRL capability assessment; naïve translation yields inflated and misleading scores.
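The zero-shot protocol described above amounts to prompting each model once per localized multiple-choice item and scoring exact-match accuracy on the answer letter. Below is a minimal sketch of such a loop; the item schema, the `ask_model` callable, and the regex-based answer extraction are illustrative assumptions, not the paper's released code.

```python
import re

def format_prompt(item: dict) -> str:
    """Render one localized MMLU-style item as a zero-shot prompt."""
    choices = "\n".join(
        f"{letter}. {text}"
        for letter, text in zip("ABCD", item["choices"])
    )
    return (
        f"{item['question']}\n{choices}\n"
        "Answer with a single letter (A, B, C, or D)."
    )

def extract_letter(reply: str) -> str | None:
    """Pull the first standalone A-D letter out of the model's reply."""
    match = re.search(r"\b([ABCD])\b", reply.strip())
    return match.group(1) if match else None

def evaluate(items: list[dict], ask_model) -> float:
    """Score any model (a callable prompt -> reply string) by accuracy."""
    correct = 0
    for item in items:
        reply = ask_model(format_prompt(item))   # one zero-shot call per item
        if extract_letter(reply) == item["answer"]:  # gold answer is "A".."D"
            correct += 1
    return correct / len(items)

# Hypothetical usage: `call_llm` stands in for any chat-completion API call.
# accuracy = evaluate(localized_items, ask_model=call_llm)
```

Parsing only the first standalone letter keeps scoring uniform across models that answer tersely or verbosely; the paper's actual prompt templates and parsing rules may differ.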
📝 Abstract
As large language models (LLMs) rapidly advance, evaluating their performance is critical. LLMs are trained on multilingual data, but their reasoning abilities are mainly evaluated using English datasets. Hence, robust evaluation frameworks are needed that use high-quality non-English datasets, especially in low-resource languages (LRLs). This study evaluates eight state-of-the-art (SOTA) LLMs on Latvian and Giriama using a Massive Multitask Language Understanding (MMLU) subset curated with native speakers for linguistic and cultural relevance. Giriama is benchmarked for the first time. Our evaluation shows that OpenAI's o1 model outperforms others across all languages, scoring 92.8% in English, 88.8% in Latvian, and 70.8% in Giriama on zero-shot tasks. By contrast, Mistral-large (35.6%) and Llama-70B IT (41%) perform weakly on both Latvian and Giriama. Our results underscore the need for localized benchmarks and human evaluations in advancing culturally contextualized AI.