🤖 AI Summary
This study investigates whether large language models (LLMs) can adapt their value orientation and response content to a user's national cultural values, operationalized via five Hofstede cultural dimensions. Method: The authors construct culture-specific personas for 36 countries and, separately, pose prompts in the languages predominantly tied to those countries, then assess the consistency of the LLMs' cultural understanding through a mixed qualitative-quantitative analysis of advice responses. Results: While LLMs can distinguish opposing poles of a value dimension and recognize that countries hold differing values, they do not consistently uphold those values when giving advice and fail to adapt their answers to differing cultural contexts, a gap between recognizing cultural values and executing on them. Based on these findings, the study presents recommendations for training value-aligned, culturally sensitive LLMs; its methodology and framework can help further understand and mitigate culture and language alignment issues in LLMs.
📝 Abstract
Large Language Models (LLMs) attempt to imitate human behavior by responding to humans in ways that please them, including by adhering to their values. However, humans come from diverse cultures with differing values. It is therefore critical to understand whether LLMs present different values to a user based on the stereotypical values of the user's known country. We prompt different LLMs with a series of advice requests grounded in five Hofstede cultural dimensions, a quantifiable way of representing the values of a country. In each prompt, we incorporate personas representing 36 different countries and, separately, languages predominantly tied to each country, to analyze the consistency of the LLMs' cultural understanding. Our analysis of the responses shows that LLMs can differentiate between opposing poles of a value dimension and understand that countries hold differing values, but they will not always uphold those values when giving advice, and they fail to recognize the need to answer differently under different cultural values. Rooted in these findings, we present recommendations for training value-aligned and culturally sensitive LLMs. More importantly, the methodology and framework developed here can help further understand and mitigate culture and language alignment issues with LLMs.
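The two prompting conditions described above (country personas vs. country-associated languages) can be sketched roughly as follows. This is an illustrative sketch only: the template wording, the toy advice request, and the function names are assumptions for exposition, not the study's actual prompts or code.

```python
# Illustrative sketch of the persona- and language-based prompting setup.
# The prompt template, the example advice request, and the placeholder
# country names are hypothetical, not taken from the study's materials.

ADVICE_REQUEST = (
    "My manager made a decision I disagree with. "
    "Should I voice my disagreement openly?"
)  # a toy request probing something like the power-distance dimension


def build_persona_prompt(country: str, advice_request: str) -> str:
    """Condition 1: wrap the advice request in a country-specific persona."""
    return f"I am a person from {country}. {advice_request}"


def build_language_prompt(advice_request_translated: str) -> str:
    """Condition 2: pose the request in a language predominantly tied to the
    country, with no explicit persona, to test cross-lingual consistency."""
    return advice_request_translated


if __name__ == "__main__":
    # Stand-ins for the study's 36 countries.
    for country in ["Country A", "Country B"]:
        print(build_persona_prompt(country, ADVICE_REQUEST))
```

Comparing responses across the two conditions for the same country is what lets one ask whether the model's cultural understanding is consistent, i.e., whether an explicit persona and an implicit language cue elicit the same value orientation.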