On the Alignment of Large Language Models with Global Human Opinion

📅 2025-09-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the global bias in large language models' (LLMs) value alignment: existing work focuses narrowly on contemporary, English-language samples from the U.S. and Europe, neglecting systematic cross-national, cross-lingual, and diachronic evaluation. To bridge this gap, we introduce the first benchmark for LLM value alignment grounded in the World Values Survey (WVS), covering multiple countries, languages, and historical periods. Using prompt engineering and response analysis, we quantify the alignment between LLM outputs and human societal attitudes. We propose a "prompt-language matching" strategy, which poses prompts in the same language as the survey instrument, and empirically show that it improves country-specific value alignment more than existing steering techniques. Results reveal that current LLMs systematically under-represent non-Western perspectives and align more closely with contemporary opinions than with those of earlier periods, highlighting gaps in global representativeness and temporal sensitivity.
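
The alignment metric itself is not spelled out in this summary. As a minimal sketch, assume alignment for one survey item is the similarity between the LLM's answer distribution (over repeated sampling) and the WVS respondents' answer distribution for a given country; one minus the Jensen-Shannon distance is used here purely for illustration and is not necessarily the paper's metric:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def alignment_score(llm_counts, wvs_counts):
    """Similarity between an LLM's answer distribution and WVS respondents'
    answer distribution for one question (1 = identical, 0 = maximally different).

    Both inputs are counts over the same ordered set of answer options.
    The Jensen-Shannon metric is an illustrative choice, not necessarily
    the measure used in the paper.
    """
    p = np.asarray(llm_counts, dtype=float)
    q = np.asarray(wvs_counts, dtype=float)
    p, q = p / p.sum(), q / q.sum()          # normalize counts to distributions
    return 1.0 - jensenshannon(p, q, base=2)  # JS distance lies in [0, 1] for base 2

# Hypothetical example: a 4-option WVS item ("very important" ... "not at all important")
llm_answers = [62, 25, 9, 4]   # how often the LLM picked each option across repeated prompts
wvs_answers = [48, 31, 14, 7]  # how many survey respondents in a country picked each option
print(f"alignment = {alignment_score(llm_answers, wvs_answers):.3f}")
```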

📝 Abstract
Today's large language models (LLMs) support multilingual interaction, allowing users to converse with them in their native languages. When LLMs respond to subjective questions, they are expected to align with the views of specific demographic groups or historical periods, shaped by the language in which the user interacts with the model. Existing studies mainly examine the opinions LLMs represent for demographic groups in the United States or a few other countries; they lack worldwide country samples and studies of human opinions across historical periods, and they give little attention to using language to steer LLMs. They also overlook the potential influence of prompt language on the alignment of LLMs' opinions. In this study, our goal is to fill these gaps. To this end, we create an evaluation framework based on the World Values Survey (WVS) to systematically assess the alignment of LLMs with human opinions across different countries, languages, and historical periods around the world. We find that LLMs align appropriately with, or over-align with, the opinions of only a few countries while under-aligning with those of most countries. Furthermore, changing the language of the prompt to match the language used in the questionnaire steers LLMs toward the opinions of the corresponding country more effectively than existing steering methods. At the same time, LLMs are more aligned with the opinions of the contemporary population than with those of earlier periods. To our knowledge, this study is the first comprehensive investigation of opinion alignment in LLMs across global, language, and temporal dimensions. Our code and data are publicly available at https://github.com/nlply/global-opinion-alignment.
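
The prompt-language matching idea described above amounts to posing each WVS item in the survey language of the country whose opinions are being probed, rather than only naming that country in an English prompt. Below is a minimal sketch of that idea; the question texts, the persona-style baseline, and the build_prompt helper are hypothetical illustrations, not the paper's released prompts or code:

```python
# Hypothetical WVS item rendered in each survey language (placeholder wording).
WVS_ITEM = {
    "en": "How important is family in your life? (1) Very important (2) Rather important "
          "(3) Not very important (4) Not at all important. Answer with a single number.",
    "ja": "あなたの生活において家族はどのくらい重要ですか。(1) 非常に重要 (2) やや重要 "
          "(3) あまり重要でない (4) 全く重要でない。数字を一つだけ答えてください。",
}

def build_prompt(country: str, survey_language: str, match_language: bool) -> str:
    """Prompt-language matching: ask the question in the country's survey language
    instead of steering an English prompt with a country persona."""
    if match_language:
        return WVS_ITEM[survey_language]  # the prompt language itself carries the steering signal
    # Baseline stand-in for existing persona-style steering methods.
    return f"Answer as an average person from {country}. {WVS_ITEM['en']}"

# Compare the two steering strategies for one target country, e.g. Japan.
baseline_prompt = build_prompt("Japan", "ja", match_language=False)
matched_prompt = build_prompt("Japan", "ja", match_language=True)
```

Here the baseline variant stands in for existing persona-style steering (naming the country in an English prompt), while the matched variant relies on the prompt language alone; the paper reports the latter as the more effective way to steer country-specific alignment.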
Problem

Research questions and friction points this paper is trying to address.

Assessing LLM alignment with global human opinions across countries
Investigating language influence on opinion alignment in LLMs
Exploring temporal alignment of LLMs with historical human views
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses World Values Survey for global opinion alignment
Tests language switching to steer model responses
Evaluates temporal and cross-country opinion alignment