Do You Know About My Nation? Investigating Multilingual Language Models' Cultural Literacy Through Factual Knowledge

📅 2025-11-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing multilingual QA benchmarks suffer from geographically narrow coverage and Western-centric bias, hindering fair evaluation of models' understanding of non-Western cultural facts. To address this, we introduce XNationQA, a new evaluation benchmark of 49,280 questions across seven languages and nine countries, spanning geography, history, and culture, together with two novel cross-lingual transference metrics for systematically assessing the cultural literacy of multilingual large language models (MLLMs). Experiments on eight state-of-the-art models, analyzed by cultural domain and by cross-lingual knowledge transfer, reveal three key findings: (1) models often demonstrate greater knowledge of a culture's facts in English than in that culture's dominant language; (2) open-source models show limited cross-lingual transfer, particularly for non-Western languages; and (3) supporting a language does not entail equivalent proficiency in that language's cultural knowledge. This work fills a critical gap in culturally diverse evaluation and uncovers structural biases in how MLLMs acquire factual, culture-grounded knowledge.

📝 Abstract
Most multilingual question-answering benchmarks, while covering a diverse pool of languages, do not factor in regional diversity in the information they capture and tend to be Western-centric. This introduces a significant gap in fairly evaluating multilingual models' comprehension of factual information from diverse geographical locations. To address this, we introduce XNationQA for investigating the cultural literacy of multilingual LLMs. XNationQA encompasses a total of 49,280 questions on the geography, culture, and history of nine countries, presented in seven languages. We benchmark eight standard multilingual LLMs on XNationQA and evaluate them using two novel transference metrics. Our analyses uncover a considerable discrepancy in the models' access to culturally specific facts across languages. Notably, we often find that a model demonstrates greater knowledge of cultural information in English than in the dominant language of the respective culture. The models perform better in Western languages, although, counterintuitively, this does not necessarily translate into greater literacy about Western countries. Furthermore, we observe that models have a very limited ability to transfer knowledge across languages, which is particularly evident in open-source models.
Problem

Research questions and friction points this paper is trying to address.

Evaluating multilingual models' cultural literacy across diverse geographical regions
Addressing Western-centric bias in multilingual question-answering benchmarks
Assessing knowledge transfer of cultural facts across different languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduced XNationQA, a 49,280-question benchmark in seven languages covering the geography, history, and culture of nine countries
Benchmarked eight multilingual LLMs on XNationQA using two novel transference metrics (a sketch of one plausible formulation follows this list)
Analyzed performance discrepancies across languages and cultures, including cross-lingual knowledge transfer
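
The summary does not spell out the two transference metrics, so the following is only a hypothetical sketch of how such a metric might be computed: of the questions a model answers correctly in English, the fraction it also answers correctly in a target language. The function name, signature, and this conditional-accuracy definition are assumptions for illustration, not the paper's actual definitions.

```python
def transference(correct_en: dict[str, bool], correct_lang: dict[str, bool]) -> float:
    """Hypothetical transference score: among questions answered correctly
    in English, the fraction also answered correctly in the target language.
    Illustrative only; not necessarily the metric defined in the paper."""
    known_in_en = [qid for qid, ok in correct_en.items() if ok]
    if not known_in_en:
        return 0.0
    also_known = sum(correct_lang.get(qid, False) for qid in known_in_en)
    return also_known / len(known_in_en)

# Toy usage: per-question correctness for the same items in two languages.
en = {"q1": True, "q2": True, "q3": False}
bn = {"q1": True, "q2": False, "q3": False}
print(f"EN -> target transference: {transference(en, bn):.2f}")  # prints 0.50
```

Under any formulation of this kind, a score near 1.0 would indicate that knowledge accessible in English also surfaces in the target language, while the limited cross-lingual transfer the paper reports for open-source models would correspond to low scores.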