🤖 AI Summary
Large language models (LLMs) inherit cultural biases from their monolithic pretraining corpora, which hinders cross-cultural adaptability. To address this, we propose a human-cognition-inspired word-association learning framework that leverages native speakers' free-association data (the English (US) and Mandarin norms of the Small World of Words project). Using parameter-efficient fine-tuning, specifically supervised fine-tuning (SFT) followed by PPO-based preference optimization, we achieve cross-cultural value alignment without re-pretraining. Evaluated on Llama-3.1-8B and Qwen-2.5-7B, our method attains strong cultural adaptation from only a small amount of culture-association data: Precision@5 in word association improves by 16–165% over baselines; affective ratings (valence and arousal) reach human-level agreement; Chinese-aligned response rates on value surveys double, while U.S.-centric cultural bias decreases by 33%. This work is the first to systematically integrate the cognitive-psychology paradigm of free association into LLM cultural alignment, enabling lightweight, interpretable, and highly effective culture-aware modeling.
📝 Abstract
As large language models (LLMs) increasingly mediate cross-cultural communication, their behavior still reflects the distributional bias of the languages and viewpoints that are over-represented in their pre-training corpora. Modeling and aligning culture remains challenging, however, because cultural knowledge in training data is limited and effective learning approaches are under-explored. We introduce a cost-efficient, cognitively grounded remedy: parameter-efficient fine-tuning on native speakers' free word-association norms, which encode implicit cultural schemas. Leveraging English (US) and Mandarin associations from the Small World of Words project, we adapt Llama-3.1-8B and Qwen-2.5-7B via supervised fine-tuning (SFT) and PPO-based preference optimization. SFT boosts held-out association Precision@5 by 16-20% in English and 43-165% in Mandarin, lifts median concreteness by +0.20, and attains human-level valence and arousal. These lexical gains transfer: on World Values Survey questions, fine-tuned models shift answer distributions toward the target culture, and on a 50-item high-tension subset, Qwen's Chinese-aligned responses double while Llama's US bias drops by one-third. Our 7-8B models rival or beat vanilla 70B baselines, showing that a few million culture-grounded associations can instill value alignment without costly retraining. Our work highlights both the promise of human-cognition-grounded methods for improving cultural alignment in AI models and the need for further research in this direction.
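The Precision@5 metric used above can be sketched as follows. This is a minimal illustration, not the paper's evaluation code: the cue word, norm set, and model outputs are hypothetical placeholders standing in for the SWOW association norms and model generations.

```python
def precision_at_k(model_assocs, human_norms, k=5):
    """Fraction of the model's top-k associations for a cue word
    that also appear in the human free-association norms for that cue."""
    top_k = model_assocs[:k]
    hits = sum(1 for word in top_k if word in human_norms)
    return hits / k

# Hypothetical example for the cue word "tea" (placeholder data):
human_norms = {"cup", "green", "leaf", "drink", "hot", "coffee"}   # norm responses
model_assocs = ["cup", "coffee", "water", "green", "time"]         # model's top 5
print(precision_at_k(model_assocs, human_norms))  # 3 of 5 hits -> 0.6
```

In practice this score would be averaged over all held-out cue words, so a 16-165% relative improvement refers to that corpus-level mean.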