🤖 AI Summary
This study addresses the insufficient value alignment of large language models (LLMs) in multicultural contexts. The authors propose a parameter-free, fine-tuning-free cultural value self-alignment method that leverages empirical human cultural data—specifically the World Values Survey (WVS)—combined with in-context learning (ICL). Through multilingual prompt engineering, the approach adapts model responses at inference time to reflect culturally specific preferences across more than 20 countries, along dimensions such as power distance and individualism. Evaluated on five mainstream LLMs, including both English-centric and multilingual models, the method improves cross-cultural value consistency, shows promise in non-English test languages, and yields interpretable, customizable alignment. The core contribution is a zero-shot, cross-lingual value alignment paradigm grounded in empirically validated cultural survey data, supporting culturally aware LLM deployment.
📝 Abstract
Improving the alignment of Large Language Models (LLMs) with the cultural values that they encode has become an increasingly important topic. In this work, we study whether we can exploit existing knowledge about cultural values at inference time to adjust model responses to cultural value probes. We present a simple and inexpensive method that uses a combination of in-context learning (ICL) and human survey data, and show that it improves alignment with cultural values across five models, including both English-centric and multilingual LLMs. Importantly, we show that our method could prove useful in test languages other than English and can improve alignment with the cultural values of a range of culturally diverse countries.
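To make the idea concrete, here is a minimal sketch of what survey-grounded in-context prompting could look like. All names, question texts, and answer values below are illustrative placeholders, not actual WVS data or the authors' implementation: the point is only the mechanism of prepending country-specific question/answer pairs as demonstrations before a new value probe.

```python
# Illustrative sketch: build an ICL prompt from survey-style exemplars.
# The exemplar answers below are dummy placeholders, NOT real WVS statistics.
SURVEY_EXAMPLES = {
    "Germany": [
        ("Do you think most people can be trusted?", "Placeholder answer A"),
        ("How important is family in your life?", "Placeholder answer B"),
    ],
}

def build_cultural_prompt(country: str, probe: str,
                          examples: dict = SURVEY_EXAMPLES) -> str:
    """Prepend survey question/answer pairs for `country` as in-context
    demonstrations, then append the new value probe for the model to answer."""
    lines = [f"The following answers reflect typical survey responses from {country}."]
    for question, answer in examples[country]:
        lines.append(f"Q: {question}\nA: {answer}")
    lines.append(f"Q: {probe}\nA:")  # model completes the final answer
    return "\n\n".join(lines)

prompt = build_cultural_prompt("Germany", "Is competition good or harmful?")
print(prompt)
```

In a multilingual setting, the same template would simply be rendered in the test language; the survey exemplars act as the inference-time "knowledge about cultural values" that the abstract refers to.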