Emergence of human-like polarization among large language model agents

📅 2025-01-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how opinion divergence among large language model (LLM) agents drives sociopolitical polarization and its evolutionary dynamics. We construct a multi-agent dialogue system comprising thousands of LLM agents, integrating natural-language interaction protocols, dynamic social network modeling, and a novel polarization metric based on ideological clustering coefficients. Our experiments demonstrate—without explicit programming—that LLM agents spontaneously self-organize into human-like social network structures and robustly reproduce empirically observed polarization phenomena, including echo chambers and homophilous clustering. This work bridges AI behavioral science and social theory for the first time, empirically validating that LLM collectives exhibit both structural and dynamic human-like opinion evolution. Moreover, it establishes a controllable, reproducible digital testbed for polarization research, offering a new paradigm for understanding, measuring, and intervening in societal polarization processes.
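The summary mentions a polarization metric based on ideological clustering coefficients but does not define it. One plausible reading (purely an illustrative assumption, not the authors' definition) is the average fraction of each agent's neighbours that share its ideological leaning:

```python
# Hypothetical sketch of an "ideological clustering coefficient".
# The paper's exact metric is not specified on this page; this is
# an illustrative assumption, not the authors' definition.

def ideological_clustering(opinions, edges):
    """Average, over agents, of the fraction of neighbours whose
    opinion has the same sign (positive vs. negative ideology)."""
    neighbours = {agent: [] for agent in opinions}
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)
    scores = []
    for agent, nbrs in neighbours.items():
        if not nbrs:
            continue  # isolated agents contribute nothing
        same = sum(1 for n in nbrs
                   if (opinions[n] > 0) == (opinions[agent] > 0))
        scores.append(same / len(nbrs))
    return sum(scores) / len(scores) if scores else 0.0

# Toy network: two like-minded clusters joined by one bridging edge.
opinions = {1: 0.8, 2: 0.6, 3: 0.9, 4: -0.7, 5: -0.5, 6: -0.9}
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
print(ideological_clustering(opinions, edges))  # close to 1 = strongly clustered
```

A value near 1 indicates that agents are mostly surrounded by like-minded neighbours (homophilous clustering), while a value near 0.5 indicates well-mixed opinions.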

📝 Abstract
Rapid advances in large language models (LLMs) have empowered autonomous agents to establish social relationships, communicate, and form shared and diverging opinions on political issues. However, our understanding of their collective behaviours and underlying mechanisms remains incomplete, posing unexpected risks to human society. In this paper, we simulate a networked system of thousands of large language model agents, discovering that their social interactions, guided through LLM conversation, result in human-like polarization. These agents not only spontaneously develop their own social network with human-like properties, including homophilic clustering, but also shape their collective opinions through mechanisms observed in the real world, including the echo chamber effect. Similarities between humans and LLM agents -- encompassing behaviours, mechanisms, and emergent phenomena -- raise concerns about their capacity to amplify societal polarization, but also hold the potential to serve as a valuable testbed for identifying plausible strategies to mitigate polarization and its consequences.
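The abstract names two mechanisms, echo-chamber opinion reinforcement and homophilic network formation. In the paper these emerge from LLM conversation; a classical numeric stand-in (a bounded-confidence sketch, all parameters assumed) shows how the two interact:

```python
import random

# Classical bounded-confidence sketch of the two mechanisms the
# abstract names: echo-chamber opinion updates and homophilic
# rewiring. The paper drives these via LLM conversation; this
# numeric stand-in is an illustrative assumption only.

def step(opinions, edges, threshold=0.5, rate=0.3, rng=random):
    edges = list(edges)
    for i, (a, b) in enumerate(edges):
        if abs(opinions[a] - opinions[b]) < threshold:
            # Echo chamber: like-minded neighbours pull each other closer.
            mid = (opinions[a] + opinions[b]) / 2
            opinions[a] += rate * (mid - opinions[a])
            opinions[b] += rate * (mid - opinions[b])
        else:
            # Homophily: drop the discordant tie and rewire to a
            # randomly chosen more similar agent, if one exists.
            candidates = [c for c in opinions if c != a
                          and abs(opinions[c] - opinions[a]) < threshold]
            if candidates:
                edges[i] = (a, rng.choice(candidates))
    return opinions, edges

# Two nearby opinions converge under repeated echo-chamber updates.
ops, eds = {1: 0.0, 2: 0.2}, [(1, 2)]
for _ in range(50):
    ops, eds = step(ops, eds)
print(ops)  # both opinions approach their common mean, 0.1
```

Iterated over a larger population, updates like these fragment the network into internally homogeneous, mutually distant clusters, the qualitative outcome the abstract reports for LLM agents.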
Problem

Research questions and friction points this paper is trying to address.

Language Models
Social Harmony
Perspective Discrepancies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Social Behavior
Echo Chamber Effect
Jinghua Piao
Tsinghua University
Zhihong Lu
Department of Electronic Engineering, Tsinghua University, Beijing National Research Center for Information Science and Technology (BNRist), Beijing, P. R. China.
Chen Gao
Department of Electronic Engineering, Tsinghua University, Beijing National Research Center for Information Science and Technology (BNRist), Beijing, P. R. China.
Fengli Xu
Tsinghua University
LLM Agent, Data Science, Social Computing, Science of Science, Urban Science
Fernando P. Santos
Informatics Institute (IvI), University of Amsterdam
multiagent systems, complex systems, evolutionary game theory, network science, algorithmic fairness
Yong Li
Department of Electronic Engineering, Tsinghua University, Beijing National Research Center for Information Science and Technology (BNRist), Beijing, P. R. China.
James Evans
Max Palevsky Professor of Sociology & Data Science, University of Chicago
science of science, innovation, sociology of knowledge, artificial intelligence, deep learning