Social Simulations with Large Language Model Risk Utopian Illusion

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies a pervasive “utopian bias” in large language models (LLMs) when simulating human social behavior: an over-idealization of actions that diverges from the complexity and heterogeneity of real-world social interaction. This bias, not previously examined systematically, undermines the validity of LLMs in sociocognitive applications. Method: We propose the first multidimensional analytical framework for assessing social-cognitive biases in LLMs, grounded in multi-agent chatroom simulations. Our framework quantifies five linguistic dimensions: social role bias, primacy effect, positivity bias, social desirability bias, and behavioral consistency. Contribution/Results: Evaluating eight state-of-the-art LLMs across three architectural families, we demonstrate that their behavioral representations are strongly driven by social desirability bias, leading to statistically significant deviations from empirically observed human interaction patterns. This work establishes both theoretical foundations and methodological tools for risk assessment and trustworthy modeling of LLMs in social contexts.

📝 Abstract
Reliable simulation of human behavior is essential for explaining, predicting, and intervening in our society. Recent advances in large language models (LLMs) have shown promise in emulating human behaviors, interactions, and decision-making, offering a powerful new lens for social science studies. However, the extent to which LLMs diverge from authentic human behavior in social contexts remains underexplored, posing risks of misinterpretation in scientific studies and unintended consequences in real-world applications. Here, we introduce a systematic framework for analyzing LLMs' behavior in social simulation. Our approach simulates multi-agent interactions through chatroom-style conversations and analyzes them across five linguistic dimensions, providing a simple yet effective method to examine emergent social cognitive biases. We conduct extensive experiments involving eight representative LLMs across three families. Our findings reveal that LLMs do not faithfully reproduce genuine human behavior but instead reflect overly idealized versions of it, shaped by the social desirability bias. In particular, LLMs show social role bias, primacy effect, and positivity bias, resulting in "Utopian" societies that lack the complexity and variability of real human interactions. These findings call for more socially grounded LLMs that capture the diversity of human social behavior.
Problem

Research questions and friction points this paper is trying to address.

Analyzing divergence between LLM-simulated and authentic human social behavior
Identifying emergent cognitive biases in multi-agent language model interactions
Addressing risks of utopian illusions in social science simulations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic framework analyzes LLM social behavior
Simulates multi-agent interactions via chatroom conversations
Examines five linguistic dimensions for cognitive biases
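The paper describes its five-dimension analysis only at a high level here. As a rough illustration of what quantifying one such dimension could look like, the sketch below scores positivity in simulated chatroom messages with a tiny sentiment lexicon. The word lists, function name, and scoring formula are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch (NOT the paper's actual metric): quantify a
# "positivity bias" dimension as the balance of positive vs. negative
# sentiment-bearing words in a set of chat messages.

POSITIVE = {"great", "love", "thanks", "agree", "wonderful", "happy"}
NEGATIVE = {"hate", "wrong", "annoying", "stupid", "angry", "boring"}

def positivity_score(messages):
    """Return (pos - neg) / (pos + neg) over sentiment-bearing tokens, in [-1, 1]."""
    pos = neg = 0
    for msg in messages:
        for token in msg.lower().split():
            word = token.strip(".,!?'\"")
            if word in POSITIVE:
                pos += 1
            elif word in NEGATIVE:
                neg += 1
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

# A simulated society whose agents score uniformly near +1, while a human
# baseline is mixed, would hint at the "utopian" idealization described above.
llm_msgs = ["I love this idea, thanks!", "Great point, I agree!"]
human_msgs = ["That's wrong and a bit annoying.", "Thanks, but I disagree."]
print(positivity_score(llm_msgs))    # 1.0
print(positivity_score(human_msgs))  # negative: criticism outweighs praise
```

In practice a lexicon this small would be far too crude; a validated sentiment model and a human-conversation baseline corpus would be needed to make the comparison meaningful.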
Ning Bian
China University of Mining and Technology
Xianpei Han
Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing, 100190, China.
Hongyu Lin
Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing, 100190, China.
Baolei Wu
School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, 221116, Jiangsu, China.
Jun Wang
School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, 221116, Jiangsu, China.