Family Matters: Language Transfer and Merging for Adapting Small LLMs to Faroese

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of adapting small, efficient large language models (LLMs) to Faroese, a low-resource North Germanic language. Methodologically, it combines cross-lingual transfer with model merging: starting from an English base model, it performs continued pretraining on closely related Scandinavian languages (e.g., Icelandic, Danish), either individually or merged at the weight level, followed by Faroese-specific adaptation via full fine-tuning or parameter-efficient tuning with LoRA. Key contributions include: (1) the first minimal-pair benchmark suite for Faroese; (2) a human evaluation protocol led by Faroese linguists; and (3) empirical findings showing that Icelandic transfer markedly improves linguistic accuracy, while Danish transfer benefits comprehension tasks; LoRA improves linguistic acceptability, whereas full fine-tuning better preserves the base model's capabilities and yields stronger comprehension performance.
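The weight merging mentioned above can be sketched as a simple linear interpolation of parameter tensors from two models with identical architectures. This is a minimal illustrative sketch, not the paper's actual merging procedure; the dictionary keys and toy values are invented for demonstration, and a real run would operate on framework tensors (e.g., PyTorch state dicts) rather than flat lists.

```python
def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Elementwise alpha * A + (1 - alpha) * B for each shared parameter.

    Here parameters are flat lists of floats standing in for weight tensors.
    """
    return {
        k: [alpha * a + (1.0 - alpha) * b for a, b in zip(sd_a[k], sd_b[k])]
        for k in sd_a
    }

# Toy "checkpoints" standing in for models pretrained on different source languages.
sd_icelandic = {"embed.weight": [2.0, 4.0]}
sd_danish = {"embed.weight": [0.0, 2.0]}

merged = merge_state_dicts(sd_icelandic, sd_danish, alpha=0.5)
print(merged["embed.weight"])  # [1.0, 3.0]
```

The interpolation weight `alpha` controls how much each source-language checkpoint contributes; the merged model would then be fine-tuned on Faroese.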

📝 Abstract
We investigate how to adapt small, efficient LLMs to Faroese, a low-resource North Germanic language. Starting from English models, we continue pre-training on related Scandinavian languages, either individually or combined via merging, before fine-tuning on Faroese. We compare full fine-tuning with parameter-efficient tuning using LoRA, evaluating their impact on both linguistic accuracy and text comprehension. Due to the lack of existing Faroese evaluation data, we construct two new minimal-pair benchmarks from adapted and newly collected datasets and complement them with human evaluations by Faroese linguists. Our results demonstrate that transfer from related languages is crucial, though the optimal source language depends on the task: Icelandic enhances linguistic accuracy, whereas Danish boosts comprehension. Similarly, the choice between full fine-tuning and LoRA is task-dependent: LoRA improves linguistic acceptability and slightly increases human evaluation scores on the base model, while full fine-tuning yields stronger comprehension performance and better preserves model capabilities during downstream fine-tuning.
Problem

Research questions and friction points this paper is trying to address.

Adapting small LLMs to low-resource Faroese language
Evaluating transfer learning from related Scandinavian languages
Comparing full fine-tuning versus LoRA for task performance
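The LoRA alternative compared above replaces a full weight update with a frozen base matrix plus a learned low-rank correction. The pure-Python toy below shows only the forward-pass arithmetic of that idea (y = (W + scale · B·A)·x, with rank r much smaller than the matrix dimensions); the matrices and scale are invented for illustration, and practical use would go through a library such as Hugging Face PEFT rather than hand-rolled code.

```python
def lora_forward(W, A, B, x, scale=1.0):
    """Compute y = (W + scale * B @ A) @ x without materializing the merged weight.

    W: frozen d x k base matrix; A: r x k down-projection; B: d x r up-projection.
    Only A and B (r * (d + k) values) would be trained.
    """
    base = [sum(w * xi for w, xi in zip(row, x)) for row in W]      # W @ x
    Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]        # A @ x
    delta = [scale * sum(b * axi for b, axi in zip(row, Ax)) for row in B]  # B @ (A @ x)
    return [b + d for b, d in zip(base, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 base weight
A = [[1.0, 1.0]]               # rank-1 down-projection (1x2)
B = [[0.5], [0.5]]             # rank-1 up-projection (2x1)

print(lora_forward(W, A, B, [2.0, 4.0]))  # [5.0, 7.0]
```

With `scale=0.0` the adapter is inert and the base model's behavior is recovered exactly, which is one reason LoRA tends to disturb the original model less than full fine-tuning.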
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transfer learning from related Scandinavian languages
Comparing full fine-tuning versus LoRA parameter-efficient tuning
Creating minimal-pair benchmarks with human evaluations
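The minimal-pair evaluation listed above scores a model as correct on a pair when it assigns higher probability to the acceptable sentence than to its minimally different unacceptable variant. The sketch below shows that scoring loop under an invented stand-in scorer; the paper's actual models, benchmark sentences, and scoring details are not shown here, and the toy log-probability (penalizing longer strings) exists only to make the example runnable.

```python
def minimal_pair_accuracy(pairs, log_prob):
    """Fraction of (acceptable, unacceptable) pairs where the model prefers the acceptable one."""
    hits = sum(1 for good, bad in pairs if log_prob(good) > log_prob(bad))
    return hits / len(pairs)

# Stand-in scorer: fewer tokens -> higher score (illustrative only, not a real LM).
toy_log_prob = lambda s: -len(s.split())

# Invented toy pairs; a real benchmark would pair grammatical Faroese
# sentences with minimally perturbed ungrammatical counterparts.
pairs = [
    ("hon er", "hon er er"),
    ("teir eru", "teir eru eru"),
]
print(minimal_pair_accuracy(pairs, toy_log_prob))  # 1.0
```

Swapping `toy_log_prob` for a real language model's sentence log-likelihood turns this into the standard minimal-pair accuracy metric.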