CARE: Aligning Language Models for Regional Cultural Awareness

📅 2025-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Mainstream language models exhibit Western-centric bias and severely underrepresent Chinese and Arab cultural knowledge. Method: We introduce CARE, the first human-annotated multilingual cultural preference dataset covering Chinese and Arab cultures (24.1K samples), providing high-quality, small-scale supervision signals for cultural alignment. Leveraging the human preference annotations, we perform supervised fine-tuning and conduct systematic analysis via cross-lingual consistency evaluation, comparison across model families, and retrieval-augmented probing of cultural knowledge. Contribution/Results: Our approach significantly improves cultural alignment across diverse model scales and architectures without compromising general capabilities, and we quantify regional disparities in how mainstream LMs represent Chinese and Arab cultures. To foster research on cultural fairness, we publicly release the CARE dataset.

📝 Abstract
Existing language models (LMs) often exhibit a Western-centric bias and struggle to represent diverse cultural knowledge. Previous attempts to address this rely on synthetic data and express cultural knowledge only in English. In this work, we study whether a small amount of human-written, multilingual cultural preference data can improve LMs across various model families and sizes. We first introduce CARE, a multilingual resource of 24.1k responses with human preferences on 2,580 questions about Chinese and Arab cultures, all carefully annotated by native speakers and offering more balanced coverage. Using CARE, we demonstrate that cultural alignment improves existing LMs beyond generic resources without compromising general capabilities. Moreover, we evaluate the cultural awareness of LMs, native speakers, and retrieved web content when queried in different languages. Our experiment reveals regional disparities among LMs, which may also be reflected in the documentation gap: native speakers often take everyday cultural commonsense and social norms for granted, while non-natives are more likely to actively seek out and document them. CARE is publicly available at https://github.com/Guochry/CARE (we plan to add Japanese data in the near future).
Problem

Research questions and friction points this paper is trying to address.

Addressing Western-centric bias in language models
Improving cultural representation with multilingual data
Evaluating regional disparities in cultural awareness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual cultural preference data for alignment
Human-written responses from native speakers
Improves LMs without general capability loss
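The alignment recipe above (fine-tune on the response native annotators preferred) can be sketched as a simple data-preparation step. This is a minimal illustration only: the field names (`question`, `responses`, `preferred`) are assumptions for the sketch, not CARE's actual schema.

```python
# Hypothetical sketch: turning preference-annotated records into
# (prompt, target) pairs for supervised fine-tuning, keeping only
# the human-preferred response. Schema is illustrative, not CARE's.

def preferences_to_sft(records):
    """Convert preference records into SFT examples."""
    pairs = []
    for rec in records:
        chosen = rec["responses"][rec["preferred"]]  # index of preferred response
        pairs.append({"prompt": rec["question"], "target": chosen})
    return pairs

# Toy example in the spirit of the dataset's cultural questions.
care_like = [
    {
        "question": "What dishes are typically served at a Chinese New Year dinner?",
        "responses": [
            "Fish and dumplings, symbolizing surplus and wealth.",
            "Turkey with stuffing and cranberry sauce.",
        ],
        "preferred": 0,
    },
]

sft_data = preferences_to_sft(care_like)
```

The resulting pairs can then be fed to any standard instruction-tuning pipeline; the paper's point is that a small amount of such human-written multilingual data suffices.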