🤖 AI Summary
This paper addresses geographic bias in large language models (LLMs), particularly U.S.-centric dietary preferences, in their knowledge of food culture. Methodologically, it introduces FmLAMA, the first multilingual factual dataset of food culture, covering six languages, and proposes a cross-lingual template-probing framework augmented with culturally contextualized prompts, evaluated empirically across diverse architectures including LLaMA, BLOOM, and mT5. Key contributions include: (1) the first systematic characterization of the interplay among probing language, model architecture, and cultural representation; (2) empirical evidence that injecting culturally grounded context improves cultural knowledge retrieval accuracy by an average of 23.7%; and (3) the release of a reproducible methodology, a benchmark dataset, and a diagnostic toolkit for building culturally equitable LLMs.
📝 Abstract
Recent studies have highlighted the presence of cultural biases in Large Language Models (LLMs), yet they often lack a robust methodology for dissecting these phenomena comprehensively. Our work aims to bridge this gap by examining the food domain, a universally relevant yet culturally diverse aspect of human life. We introduce FmLAMA, a multilingual dataset centered on food-related cultural facts and variations in food practices. We analyze LLMs across various architectures and configurations, evaluating their performance in both monolingual and multilingual settings. By leveraging templates in six different languages, we investigate how LLMs interact with language-specific and cultural knowledge. Our findings reveal that (1) LLMs demonstrate a pronounced bias towards food knowledge prevalent in the United States; (2) incorporating relevant cultural context significantly improves LLMs' ability to access cultural knowledge; and (3) the efficacy of LLMs in capturing cultural nuances depends heavily on the interplay between the probing language, the specific model architecture, and the cultural context in question. This research underscores the complexity of integrating cultural understanding into LLMs and emphasizes the importance of culturally diverse datasets for mitigating biases and enhancing model performance across different cultural domains.