Disentangling the Drivers of LLM Social Conformity: An Uncertainty-Moderated Dual-Process Mechanism

📅 2025-08-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the dual mechanisms underlying social conformity in large language models (LLMs): informational influence (leveraging group cues to improve judgment accuracy) and quasi-normative influence (conforming under social pressure). Method: Adapting the information cascade paradigm from behavioral economics, previously unapplied to LLMs, we systematically manipulate informational uncertainty (q = 0.667, 0.55, and 0.70 for medical, legal, and investment decision-making tasks, respectively), evaluating nine state-of-the-art LLMs. Contribution/Results: We find that LLMs underweight all evidence sources under low-to-medium uncertainty, but under high uncertainty they significantly over-rely on public signals (β > 1.55) while discounting private ones (β = 0.81), revealing a dual-process conformity mechanism. This demonstrates that LLMs exhibit human-like social cognition, offering a novel theoretical framework and empirical foundation for understanding their collaborative behavior and reliability boundaries.
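The cascade paradigm itself is easy to illustrate with a small simulation. The sketch below is a minimal, assumed reconstruction (function and variable names are ours, not the paper's code): each trial has a hidden ground truth, a private signal that is correct with probability q, and a sequence of public predecessor choices that the evaluated model would then see alongside its own signal.

```python
import random

def cascade_trial(q: float, n_predecessors: int, rng: random.Random):
    """Simulate one information-cascade trial with signal accuracy q."""
    state = rng.choice(["A", "B"])  # hidden ground truth
    flip = {"A": "B", "B": "A"}

    def draw_signal() -> str:
        # A private signal matches the true state with probability q.
        return state if rng.random() < q else flip[state]

    # Each predecessor chooses by majority over its own signal plus all
    # earlier public choices, a simple stand-in for Bayesian updating.
    public_choices: list[str] = []
    for _ in range(n_predecessors):
        votes = public_choices + [draw_signal()]
        public_choices.append("A" if votes.count("A") >= votes.count("B") else "B")

    return state, draw_signal(), public_choices

# Example at one of the studied uncertainty levels
# (q = 0.667 medical, 0.55 legal, 0.70 investment):
state, private_signal, history = cascade_trial(0.667, 3, random.Random(0))
```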

📝 Abstract
As large language models (LLMs) integrate into collaborative teams, their social conformity -- the tendency to align with majority opinions -- has emerged as a key concern. In humans, conformity arises from informational influence (rational use of group cues for accuracy) or normative influence (social pressure for approval), with uncertainty moderating this balance by shifting decision-making from purely analytical to heuristic processing. It remains unclear whether these human psychological mechanisms apply to LLMs. This study adapts the information cascade paradigm from behavioral economics to quantitatively disentangle the two drivers and to investigate this moderating effect. We evaluated nine leading LLMs across three decision-making scenarios (medical, legal, investment), manipulating information uncertainty (q = 0.667, 0.55, and 0.70, respectively). Our results indicate that informational influence underpins the models' behavior across all contexts, with accuracy and confidence consistently rising with stronger evidence. However, this foundational mechanism is dramatically modulated by uncertainty. In low-to-medium uncertainty scenarios, the informational process is expressed as a conservative strategy, whereby LLMs systematically underweight all evidence sources. In contrast, high uncertainty triggers a critical shift: while still processing information, the models additionally exhibit a normative-like amplification, causing them to overweight public signals (β > 1.55 vs. private β = 0.81).
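For interpretation: a Bayesian decision-maker weights each signal by its log-odds ln(q/(1-q)), so β = 1 marks fully rational use of evidence, β < 1 underweighting, and β > 1 overweighting. The sketch below shows how such weights can be recovered by logistic regression of choices on private and public evidence; the input arrays are fabricated placeholders (the paper's data and exact regression specification are not reproduced here).

```python
import numpy as np
import statsmodels.api as sm

def signal_logodds(q: float) -> float:
    # Log-odds carried by one signal of accuracy q.
    return np.log(q / (1 - q))

# Illustrative inputs, one row per trial (placeholder values):
# private:    +1/-1 direction of the private signal
# public_net: (# predecessors choosing A) - (# choosing B)
# choice:     1 if the LLM chose A, else 0
private = np.array([+1, -1, +1, +1, -1, +1, -1, -1, +1, +1])
public_net = np.array([2, 1, -1, 3, -2, 0, -3, 1, 2, -1])
choice = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])

q = 0.667
X = np.column_stack([
    private * signal_logodds(q),     # private evidence in log-odds units
    public_net * signal_logodds(q),  # public evidence in log-odds units
])
fit = sm.Logit(choice, sm.add_constant(X)).fit(disp=0)
beta_private, beta_public = fit.params[1], fit.params[2]
print(f"beta_private={beta_private:.2f}, beta_public={beta_public:.2f}")
```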
Problem

Research questions and friction points this paper is trying to address.

Investigating LLM social conformity drivers through uncertainty moderation
Disentangling informational vs normative influences in LLM decision-making
Examining uncertainty's role in shifting LLM processing strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapted information cascade paradigm for LLM analysis
Quantified uncertainty moderation in dual-process mechanisms
Measured evidence weighting shifts across decision scenarios (formalized in the weighting model sketched below)
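One standard way to formalize the measured weighting, written here as a sketch of the usual cascade-regression form (the paper's exact specification may differ), models the agent's choice log-odds as a β-weighted sum of private and public evidence:

$$
\log\frac{P(\text{choose } A)}{P(\text{choose } B)}
= \beta_{\mathrm{priv}}\, s \,\ln\frac{q}{1-q}
+ \beta_{\mathrm{pub}}\, m \,\ln\frac{q}{1-q},
$$

where s ∈ {+1, −1} is the private signal's direction and m is the net public majority (choices for A minus choices for B). Setting β = 1 recovers the Bayesian posterior, so the reported private β = 0.81 and public β > 1.55 under high uncertainty correspond to discounted private evidence and amplified social evidence, respectively.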
👥 Authors
Huixin Zhong (Xi'an Jiaotong-Liverpool University, Suzhou, China)
Yanan Liu (Shanghai University)
Qi Cao (Xi'an Jiaotong-Liverpool University, Suzhou, China)
Shijin Wang (Tongji University)
Zijing Ye (Xi'an Jiaotong-Liverpool University, Suzhou, China)
Zimu Wang (Tsinghua University)
Shiyao Zhang (Xi'an Jiaotong-Liverpool University, Suzhou, China)