ALIGN: Word Association Learning for Cross-Cultural Generalization in Large Language Models

📅 2025-08-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit cultural biases inherited from monolithic pretraining corpora, hindering cross-cultural adaptability. To address this, we propose a human-cognition-inspired word association learning framework that leverages native-speaker free association data (Small-World-of-Words English–Chinese corpus). Using parameter-efficient fine-tuning—specifically supervised fine-tuning (SFT) followed by PPO-based preference optimization—we achieve cross-cultural value alignment without re-pretraining. Evaluated on Llama-3.1-8B and Qwen-2.5-7B, our method attains superior cultural adaptation using only minimal culture-association data: Precision@5 in word association improves by 16–165% over baselines; affective ratings (valence and arousal) match human-level performance; Chinese response rates in value surveys double, while U.S.-centric cultural bias decreases by 33%. This work is the first to systematically integrate the cognitive psychology paradigm of free association into LLM cultural alignment, enabling lightweight, interpretable, and highly effective culture-aware modeling.
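The Precision@5 metric reported above can be illustrated with a minimal sketch: for a cue word, the model's top-5 predicted associations are scored against the set of human free-association responses for that cue. The function name and example words below are illustrative, not taken from the paper.

```python
def precision_at_k(predicted, gold, k=5):
    """Fraction of the top-k predicted associations that appear in the
    human free-association norms for the same cue word."""
    top_k = predicted[:k]
    if not top_k:
        return 0.0
    return sum(1 for word in top_k if word in gold) / len(top_k)

# Illustrative cue "bread": model predictions vs. a human norm set
predicted = ["butter", "toast", "water", "flour", "wine"]
gold = {"butter", "toast", "flour", "bakery", "food"}
print(precision_at_k(predicted, gold))  # 3 of 5 overlap -> 0.6
```

A relative improvement of 16–165% on this metric means the fine-tuned model's top-5 associations overlap substantially more with native-speaker norms than the baseline's do.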

📝 Abstract
As large language models (LLMs) increasingly mediate cross-cultural communication, their behavior still reflects the distributional bias of the languages and viewpoints that are over-represented in their pre-training corpora. Yet, it remains a challenge to model and align culture due to limited cultural knowledge and a lack of exploration into effective learning approaches. We introduce a cost-efficient, cognitively grounded remedy: parameter-efficient fine-tuning on native speakers' free word-association norms, which encode implicit cultural schemas. Leveraging English-US and Mandarin associations from the Small-World-of-Words project, we adapt Llama-3.1-8B and Qwen-2.5-7B via supervised fine-tuning (SFT) and PPO-based preference optimization. SFT boosts held-out association Precision at 5 by 16-20% in English and 43-165% in Mandarin, lifts median concreteness by +0.20, and attains human-level valence and arousal. These lexical gains transfer: on World-Values-Survey questions, fine-tuned models shift answer distributions toward the target culture, and on a 50-item high-tension subset, Qwen's Chinese-aligned responses double while Llama's US bias drops by one-third. Our 7-8B models rival or beat vanilla 70B baselines, showing that a few million culture-grounded associations can instill value alignment without costly retraining. Our work highlights both the promise and the need for future research grounded in human cognition in improving cultural alignment in AI models.
Problem

Research questions and friction points this paper is trying to address.

Addressing cultural bias in LLMs from over-represented pre-training data
Aligning AI models with cultural schemas using word association norms
Improving cross-cultural generalization without costly full retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-efficient fine-tuning on word-association norms
Supervised fine-tuning and PPO-based preference optimization
Uses native speakers' free word-association data
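The SFT stage described above turns free word-association norms into training examples. A minimal sketch of that data-formatting step follows; the prompt template and dictionary layout are assumptions for illustration, not the paper's exact format.

```python
def to_sft_examples(norms):
    """Convert cue -> [association, ...] norms into prompt/completion
    pairs for supervised fine-tuning (template is illustrative)."""
    examples = []
    for cue, associations in norms.items():
        prompt = f"What words come to mind when you hear '{cue}'?"
        completion = ", ".join(associations)
        examples.append({"prompt": prompt, "completion": completion})
    return examples

# Toy norm set in the style of Small-World-of-Words data
norms = {"tea": ["cup", "green", "morning"]}
print(to_sft_examples(norms))
```

Because the associations are elicited from native speakers, pairs like these carry culture-specific lexical structure that parameter-efficient fine-tuning can instill without re-pretraining.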
Chunhua Liu
PhD, School of Computing and Information Systems, The University of Melbourne
natural language processing · deep learning · computational linguistics
Kabir Manandhar Shrestha
Melbourne Data Analytics Platform, The University of Melbourne
Sukai Huang
Faculty of Information Technology, Monash University