🤖 AI Summary
Multilingual large language models exhibit significant inconsistency in cross-lingual factual knowledge transfer: correctness rates for the same fact vary markedly across query languages.
Method: We construct a benchmark comprising 10,000 country-related facts across 13 languages and propose three novel metrics (Factual Recall Score, Knowledge Transferability Score, and Cross-Lingual Factual Knowledge Transferability Score) to quantify knowledge transfer capability. We further design a multilingual factual evaluation framework integrating cross-lingual consistency analysis with question-answering-based factual verification.
Contribution/Results: Experiments reveal systematic cross-lingual generalization deficiencies in state-of-the-art models: their factual outputs are highly dependent on the query language. To foster reproducible research, we open-source both the benchmark dataset and evaluation toolkit, establishing a foundational resource for multilingual knowledge alignment and transfer studies.
📄 Abstract
Multilingual language models (LMs) are expected to recall factual knowledge consistently across languages, yet they often fail to transfer knowledge between languages even when they possess the correct information in one of the languages. For example, we find that an LM may correctly identify Rashed Al Shashai as being from Saudi Arabia when asked in Arabic, but consistently fails to do so when asked in English or Swahili. To systematically investigate this limitation, we introduce a benchmark of 10,000 country-related facts across 13 languages and propose three novel metrics (Factual Recall Score, Knowledge Transferability Score, and Cross-Lingual Factual Knowledge Transferability Score) to quantify factual recall and knowledge transferability in LMs across different languages. Our results reveal fundamental weaknesses in today's state-of-the-art LMs, particularly in cross-lingual generalization, where models fail to transfer knowledge effectively across different languages, leading to inconsistent performance that is sensitive to the language used. Our findings emphasize the need for LMs to recognize language-specific factual reliability and leverage the most trustworthy information across languages. We release our benchmark and evaluation framework to drive future research in multilingual knowledge transfer.
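The abstract names the three metrics but does not give their formulas. The sketch below shows one plausible way such metrics could be computed from per-language correctness judgments; the definitions here (recall as per-language accuracy, transferability as the conditional accuracy in a target language given the fact is known in a source language, and the cross-lingual score as the average over ordered language pairs) are assumptions for illustration, not the paper's exact formulas.

```python
# Hedged sketch of the three metrics, under assumed definitions:
# - factual_recall: fraction of facts answered correctly in one language.
# - transferability: of the facts the model knows in a source language,
#   the fraction it also answers correctly in a target language.
# - cross_lingual_score: transferability averaged over all ordered
#   language pairs.
# Input: dict mapping language code -> per-fact correctness flags,
# aligned so index i refers to the same fact in every language.

def factual_recall(correct: dict[str, list[bool]], lang: str) -> float:
    answers = correct[lang]
    return sum(answers) / len(answers)

def transferability(correct: dict[str, list[bool]], src: str, tgt: str) -> float:
    known_in_src = [i for i, ok in enumerate(correct[src]) if ok]
    if not known_in_src:
        return 0.0  # nothing to transfer from this source language
    return sum(correct[tgt][i] for i in known_in_src) / len(known_in_src)

def cross_lingual_score(correct: dict[str, list[bool]]) -> float:
    langs = list(correct)
    pairs = [(s, t) for s in langs for t in langs if s != t]
    return sum(transferability(correct, s, t) for s, t in pairs) / len(pairs)

# Toy example: 4 facts queried in 3 languages (True = correct answer),
# mirroring the abstract's observation that a fact recalled in Arabic
# may be missed in English or Swahili.
results = {
    "en": [True, True, False, False],
    "ar": [True, True, True, False],
    "sw": [True, False, False, False],
}
print(factual_recall(results, "ar"))         # 0.75
print(transferability(results, "ar", "en"))  # 2 of the 3 facts known in ar transfer
print(cross_lingual_score(results))
```

Under these assumed definitions, a model could score high on recall in every language yet low on the cross-lingual score whenever the *sets* of known facts differ between languages, which is exactly the inconsistency the benchmark is designed to expose.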