Spiral of Silence in Large Language Model Agents

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs), when deployed as autonomous agent populations, spontaneously exhibit "spiral of silence"-like opinion convergence under purely statistical generative mechanisms. Using controlled experiments, the authors systematically manipulate two classes of signals, historical dialogue context and role-based identity, and analyze opinion dynamics via Mann-Kendall trend tests, Spearman rank correlation, kurtosis, and interquartile range. Results demonstrate that statistically significant majority dominance and spiral-of-silence patterns emerge *only* when historical and role signals co-occur; manipulating either signal in isolation instead produces strong anchoring or opinion dispersion. This work challenges the conventional Spiral of Silence (SoS) theory, which presumes human-specific psychological mechanisms, by establishing that social consensus formation can arise computationally from purely statistical model interactions. It thus introduces a novel paradigm for modeling collective AI behavior grounded in measurable, mechanistic social dynamics.

📝 Abstract
The Spiral of Silence (SoS) theory holds that individuals with minority views often refrain from speaking out for fear of social isolation, enabling majority positions to dominate public discourse. When the 'agents' are large language models (LLMs), however, the classical psychological explanation is not directly applicable, since SoS was developed for human societies. This raises a central question: can SoS-like dynamics nevertheless emerge from purely statistical language generation in LLM collectives? We propose an evaluation framework for examining SoS in LLM agents. Specifically, we consider four controlled conditions that systematically vary the availability of 'History' and 'Persona' signals. Opinion dynamics are assessed using trend tests such as Mann-Kendall and Spearman's rank, along with concentration measures including kurtosis and interquartile range. Experiments across open-source and closed-source models show that history and persona together produce strong majority dominance and replicate SoS patterns; history signals alone induce strong anchoring; and persona signals alone foster diverse but uncorrelated opinions, indicating that without historical anchoring, SoS dynamics cannot emerge. The work bridges computational sociology and responsible AI design, highlighting the need to monitor and mitigate emergent conformity in LLM-agent systems.
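The abstract names four statistics used to assess opinion dynamics: the Mann-Kendall trend test, Spearman's rank correlation, kurtosis, and interquartile range. As a minimal sketch of how such an analysis could be run on a per-round opinion series (the `opinions` data below is illustrative, not from the paper; the Mann-Kendall implementation omits tie correction):

```python
import numpy as np
from scipy.stats import spearmanr, kurtosis, iqr, norm

def mann_kendall_z(x):
    """Mann-Kendall Z statistic for monotonic trend (no tie correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S = sum of sign(x_j - x_i) over all pairs i < j
    s = np.sign(x[None, :] - x[:, None])[np.triu_indices(n, k=1)].sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / np.sqrt(var_s)
    if s < 0:
        return (s + 1) / np.sqrt(var_s)
    return 0.0

# Hypothetical series: fraction of agents voicing the majority view per round.
opinions = np.array([0.52, 0.55, 0.60, 0.63, 0.70, 0.74, 0.80, 0.83])
rounds = np.arange(len(opinions))

z = mann_kendall_z(opinions)
p = 2 * (1 - norm.cdf(abs(z)))            # two-sided p-value for the trend
rho, p_rho = spearmanr(rounds, opinions)  # monotonic association with round
k = kurtosis(opinions)                    # concentration of the distribution
spread = iqr(opinions)                    # interquartile range (dispersion)
```

A significant positive Z together with a high Spearman rho would indicate a sustained drift toward the majority position; kurtosis and IQR then characterize how concentrated the final opinion distribution is.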
Problem

Research questions and friction points this paper is trying to address.

Investigating Spiral of Silence dynamics in LLM agent collectives
Evaluating how history and persona signals affect opinion formation
Assessing emergent conformity patterns in large language model systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates SoS dynamics using history and persona signals
Assesses opinion trends with statistical trend tests
Identifies conditions for majority dominance in LLMs
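The framework crosses the two signal types into a 2x2 design. A compact sketch of that grid with the qualitative outcome the abstract reports per condition (the dictionary keys and the helper are my own naming, not the paper's; the no-signal baseline outcome is not detailed in the abstract):

```python
# Qualitative outcomes per (history, persona) condition, as summarized
# in the abstract. Structure and names are illustrative.
OUTCOMES = {
    (True, True):   "majority dominance; SoS pattern replicated",
    (True, False):  "strong anchoring to prior dialogue",
    (False, True):  "diverse but uncorrelated opinions",
    (False, False): "baseline (outcome not detailed in the abstract)",
}

def expected_outcome(history: bool, persona: bool) -> str:
    """Look up the reported qualitative outcome for a signal configuration."""
    return OUTCOMES[(history, persona)]
```

The lookup makes the paper's central claim explicit: only the `(True, True)` cell reproduces SoS dynamics, consistent with the finding that history and persona signals must co-occur.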
Mingze Zhong
AAII, University of Technology Sydney, NSW, Australia
Meng Fang
University of Liverpool
Zijing Shi
University of Technology Sydney
Yuxuan Huang
University of Liverpool, Liverpool, UK
Shunfeng Zheng
AAII, University of Technology Sydney, NSW, Australia
Yali Du
Turing Fellow, Associate Professor, King's College London
Ling Chen
AAII, University of Technology Sydney, NSW, Australia
Jun Wang
University College London, London, UK