🤖 AI Summary
This study investigates whether cultural-geographic proximity facilitates cross-lingual transfer of historical knowledge in multilingual large language models (LLMs), with a focus on low-resource Baltic languages, Lithuanian in particular.
Method: We construct a novel cross-lingual historical multiple-choice question dataset covering Baltic and Nordic languages as well as English, Ukrainian, and Arabic, and evaluate mainstream open-weight (Qwen2.5-72B, Llama3.1-70B) and proprietary (GPT-4o) LLMs under a zero-shot multilingual QA framework.
Contribution/Results: Contrary to expectations, cultural proximity does not consistently enhance model performance; GPT-4o significantly outperforms all open models across languages; Nordic-finetuned models fail to surpass general multilingual baselines; and all evaluated models exhibit weak cross-lingual alignment in Baltic historical reasoning. Our work establishes the first benchmark for evaluating historical knowledge retention in Baltic languages and provides empirical evidence of critical gaps in the cultural-historical grounding of current multilingual LLMs, particularly for low-resource, geographically peripheral languages.
📝 Abstract
In this work, we evaluated the Lithuanian and general history knowledge of multilingual Large Language Models (LLMs) on a multiple-choice question-answering task. The models were tested on a dataset of Lithuanian national and general history questions translated into Baltic, Nordic, and other languages (English, Ukrainian, Arabic) to assess knowledge transfer from culturally and historically connected language groups. We evaluated GPT-4o, LLaMA 3.1 8B and 70B, Qwen2.5 7B and 72B, Mistral Nemo 12B, LLaMA 3 8B, Mistral 7B, LLaMA 3.2 3B, and Nordic fine-tuned models (GPT-SW3 and LLaMA 3 8B). Our results show that GPT-4o consistently outperformed all other models across language groups, with slightly better results for Baltic and Nordic languages. Larger open-source models such as Qwen2.5 72B and LLaMA 3.1 70B performed well but showed weaker alignment with Baltic languages. Smaller models (Mistral Nemo 12B, LLaMA 3.2 3B, Qwen2.5 7B, LLaMA 3.1 8B, and LLaMA 3 8B) showed gaps in Lithuania-related knowledge and weak alignment with Baltic languages, while performing better on Nordic and other languages. The Nordic fine-tuned models did not surpass general multilingual models, indicating that shared cultural or historical context alone does not guarantee better performance.
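The evaluation protocol described above, zero-shot multiple-choice QA scored by per-language accuracy, can be sketched roughly as follows. This is an illustrative sketch only: the prompt template, dataset field names, and the `ask` callback are assumptions for demonstration, not the paper's actual implementation, and the toy questions stand in for the real benchmark items.

```python
def format_prompt(question, options):
    """Render one zero-shot multiple-choice prompt (illustrative template)."""
    letters = "ABCD"
    lines = [question]
    lines += [f"{letter}. {option}" for letter, option in zip(letters, options)]
    lines.append("Answer with a single letter.")
    return "\n".join(lines)

def accuracy(dataset, ask):
    """Fraction of items where the model's letter matches the gold answer.

    `ask` is a hypothetical callback that sends a prompt to some LLM
    and returns its answer as a single letter.
    """
    correct = sum(
        ask(format_prompt(item["question"], item["options"])) == item["answer"]
        for item in dataset
    )
    return correct / len(dataset)

# Toy run with a stand-in "model" that always answers "A".
toy = [
    {"question": "In which year was the Union of Lublin signed?",
     "options": ["1569", "1410", "1795", "1918"], "answer": "A"},
    {"question": "Which battle took place in 1410?",
     "options": ["Poltava", "Grunwald", "Hastings", "Narva"], "answer": "B"},
]
print(accuracy(toy, lambda prompt: "A"))  # → 0.5
```

In the paper's setup, this per-dataset accuracy would be computed separately for each translated version of the question set, so that scores can be compared across Baltic, Nordic, and other language groups.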