🤖 AI Summary
Contemporary large language models (LLMs) exhibit pervasive Western-centric biases, substantially undermining their effectiveness in non-Western cultural contexts.
Method: This paper introduces the concept of “meta-cultural competence”: the capacity of LLMs to dynamically adapt to, reflect upon, and negotiate unfamiliar cultural situations, moving beyond static representations of cultural knowledge. Through a thought experiment that extends the Octopus test of Bender and Koller (2020), the study integrates cultural theory, AI interpretability, and multi-dimensional assessment methodologies to systematically articulate a theoretical definition, design principles, and initial evaluation pathways for this competence.
Contribution: The work proposes a paradigm for evaluating and advancing the cultural adaptivity of LLMs, providing both a foundational theoretical framework and actionable guidelines for developing globally equitable, culturally adaptive AI systems.
📝 Abstract
Numerous recent studies have shown that Large Language Models (LLMs) are biased towards a Western and Anglo-centric worldview, which compromises their usefulness in non-Western cultural settings. However, “culture” is a complex, multifaceted topic, and its awareness, representation, and modeling in LLMs and LLM-based applications can be defined and measured in numerous ways. In this position paper, we ask what it means for an LLM to possess “cultural awareness”, and, through a thought experiment that extends the Octopus test proposed by Bender and Koller (2020), we argue that it is not cultural awareness or knowledge but rather meta-cultural competence that is required of an LLM and LLM-based AI system for it to be useful across various, including completely unseen, cultures. We lay out the principles of meta-cultural competence for AI systems, and discuss ways to measure and model it.