Made-in-China, Thinking in America: U.S. Values Persist in Chinese LLMs

πŸ“… 2025-12-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This study investigates whether large language models (LLMs) developed in China and the United States align with the moral values of their respective domestic publics. Method: Leveraging Moral Foundations Theory (MFQ 2.0) and the World Values Survey (WVS), we construct the first cross-cultural, multi-model, empirically grounded benchmark for value alignment, evaluating ten mainstream LLMs from each country. Contribution/Results: All Chinese LLMs exhibit statistically significant alignment with U.S. public values (p < 0.001), revealing a latent β€œmade-in-China, think-in-U.S.” value shift. Neither Chinese-language prompting (reducing bias by only ~12%) nor explicit Chinese role assignment meaningfully mitigates this misalignment. This work provides the first systematic empirical evidence of global cultural misalignment in LLMs, offering a reproducible methodological framework and critical empirical foundation for AI value governance and soft power assessment.

πŸ“ Abstract
As large language models increasingly mediate access to information and facilitate decision-making, they are becoming instruments in soft power competitions between global actors such as the United States and China. So far, language models seem to be aligned with the values of Western countries, but evidence for this ethical bias comes mostly from models made by American companies. The current crop of state-of-the-art models includes several made in China, so we conducted the first large-scale investigation of how models made in China and the USA align with people from China and the USA. We elicited responses to the Moral Foundations Questionnaire 2.0 and the World Values Survey from ten Chinese models and ten American models, and we compared their responses to responses from thousands of Chinese and American people. We found that all models respond to both surveys more like American people than like Chinese people. This skew toward American values is only slightly mitigated when prompting the models in Chinese or imposing a Chinese persona on the models. These findings have important implications for a near future in which large language models generate much of the content people consume and shape normative influence in geopolitics.
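The core comparison described above, checking whether a model's survey answers sit closer to American or to Chinese population averages, can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the function name `closer_group`, the mean-absolute-difference metric, and all numbers are hypothetical placeholders (the paper's actual analysis applies statistical tests to MFQ 2.0 and WVS responses from ten models per country and thousands of human respondents).

```python
# Hedged sketch: compare a model's Likert-scale survey responses to the
# average responses of two human groups, and report the nearer group.
# All numbers are hypothetical placeholders, not data from the paper.

def closer_group(model_scores, group_means):
    """Return the group whose mean item responses are closest
    (by mean absolute difference) to the model's responses."""
    def distance(means):
        return sum(abs(m - g) for m, g in zip(model_scores, means)) / len(model_scores)
    return min(group_means, key=lambda g: distance(group_means[g]))

# Hypothetical responses to five survey items on a 1-5 Likert scale.
model = [4, 3, 5, 2, 4]
human_means = {
    "US": [3.8, 3.1, 4.6, 2.2, 3.9],
    "CN": [2.9, 4.2, 3.1, 3.8, 2.7],
}

print(closer_group(model, human_means))  # prints "US" for these made-up numbers
```

In the paper's framing, a "made-in-China, think-in-U.S." result corresponds to every Chinese model landing on the "US" side of this kind of comparison.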
Problem

Research questions and friction points this paper is trying to address.

Investigates ethical bias in Chinese and American large language models
Compares model alignment with human values from China and the USA
Assesses geopolitical implications of value skew in AI content generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comparing Chinese and American LLMs using Moral Foundations Questionnaire
Analyzing value alignment through World Values Survey responses
Testing language and persona effects on model value biases
David Haslett
Hong Kong University of Science and Technology
Linus Ta-Lun Huang
Chinese University of Hong Kong
Leila Khalatbari
Hong Kong University of Science and Technology, Sharif University of Technology
Janet Hui-wen Hsiao
Hong Kong University of Science and Technology
Antoni B. Chan
Professor of Computer Science, City University of Hong Kong
Computer Vision, Machine Learning, Surveillance, Eye Gaze Analysis, Computer Audition