It's All About In-Context Learning! Teaching Extremely Low-Resource Languages to LLMs

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited support for extremely low-resource languages—particularly those employing rare scripts—in large language models (LLMs). We systematically evaluate in-context learning (ICL) as a language adaptation mechanism, comparing zero-shot and few-shot ICL against parameter-efficient fine-tuning (PEFT) and approaches incorporating explicit language alignment signals. For the first time, we empirically assess ICL's effectiveness across 20 extremely low-resource languages. Results show that zero-shot ICL augmented with explicit language alignment significantly outperforms PEFT in zero-training-data settings, achieving superior cross-lingual generalization. The study establishes a fine-tuning-free language adaptation paradigm, offering scalable, low-barrier practical guidelines for deploying LLMs in resource-scarce linguistic contexts.

📝 Abstract
Extremely low-resource languages, especially those written in rare scripts, as shown in Figure 1, remain largely unsupported by large language models (LLMs). This is due in part to compounding factors such as the lack of training data. This paper delivers the first comprehensive analysis of whether LLMs can acquire such languages purely via in-context learning (ICL), with or without auxiliary alignment signals, and how these methods compare to parameter-efficient fine-tuning (PEFT). We systematically evaluate 20 under-represented languages across three state-of-the-art multilingual LLMs. Our findings highlight the limitations of PEFT when both a language and its script are extremely under-represented in the LLM. In contrast, zero-shot ICL with language alignment is impressively effective on extremely low-resource languages, while few-shot ICL or PEFT is more beneficial for languages relatively better represented by LLMs. For LLM practitioners working on extremely low-resource languages, we summarise guidelines grounded in our results on adapting LLMs to low-resource languages, e.g., avoiding fine-tuning a multilingual model on languages of unseen scripts.
Problem

Research questions and friction points this paper is trying to address.

Teaching low-resource languages to LLMs via in-context learning
Evaluating ICL versus fine-tuning for underrepresented languages
Addressing limitations of multilingual LLMs with rare scripts
Innovation

Methods, ideas, or system contributions that make the work stand out.

In-context learning with language alignment
Zero-shot ICL for extremely low-resource languages
Avoiding fine-tuning on unseen script languages
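To make the "ICL with language alignment" idea above concrete, here is a minimal sketch of how a zero-shot prompt might inject explicit alignment signals (e.g., entries from a bilingual lexicon) before the actual task input. The function name, prompt wording, and the Esperanto example lexicon are all illustrative assumptions, not the paper's exact prompt format.

```python
# Hypothetical sketch of zero-shot ICL with explicit language alignment:
# word-level source->target glosses are prepended to the prompt so the
# model can ground an unfamiliar language without any fine-tuning.
# All names and prompt phrasing here are illustrative assumptions.

def build_alignment_prompt(source_lang, target_lang, lexicon, sentence):
    """Compose a zero-shot prompt that supplies word alignments
    (source word -> target gloss) before the translation request."""
    lines = [f"Translate from {source_lang} to {target_lang}."]
    if lexicon:
        lines.append("Word alignments that may help:")
        for src, tgt in lexicon.items():
            lines.append(f"  {src} = {tgt}")
    lines.append(f"Sentence: {sentence}")
    lines.append("Translation:")
    return "\n".join(lines)

# Toy example: a two-entry lexicon for a language the model barely knows.
lexicon = {"saluton": "hello", "mondo": "world"}
prompt = build_alignment_prompt("Esperanto", "English", lexicon, "saluton mondo")
print(prompt)
```

The resulting string would be sent to the LLM as-is; the few-shot variant discussed in the paper would additionally append translated example pairs, while the zero-shot variant relies only on these alignment lines.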