🤖 AI Summary
This study investigates the dual mechanisms underlying social conformity in large language models (LLMs): informational influence (leveraging group cues to improve judgment accuracy) and normative influence (conforming under social pressure). Method: Adapting the information cascade paradigm from behavioral economics (previously unapplied to LLMs), we systematically manipulate informational uncertainty (q = 0.667, 0.55, and 0.70) across medical, legal, and investment decision-making tasks, evaluating nine state-of-the-art LLMs. Contribution/Results: We find that LLMs conservatively underweight evidence under low-to-moderate uncertainty, but under high uncertainty significantly over-rely on public signals (β > 1.55) relative to private ones (β ≈ 0.81), revealing a dual-process conformity mechanism. This demonstrates that LLMs exhibit human-like social cognition, offering a novel theoretical framework and an empirical foundation for understanding their collaborative behavior and reliability boundaries.
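To make the paradigm concrete, the sketch below shows the rational Bayesian benchmark against which signal weights are typically judged in cascade experiments, assuming (as in the standard Bikhchandani-style model, not stated explicitly in this summary) that q is the probability a private signal matches the true state. The trial composition in the loop is purely illustrative; this is not the paper's code.

```python
import math

def posterior_prob_a(n_for_a: int, n_for_b: int, q: float) -> float:
    """Posterior P(state = A) for a Bayesian observer, given n_for_a
    signals supporting A and n_for_b supporting B, each correct with
    probability q, under a flat prior over the two states."""
    # Each informative signal adds log(q / (1 - q)) to the
    # log-likelihood ratio in favour of the option it supports.
    llr = (n_for_a - n_for_b) * math.log(q / (1.0 - q))
    return 1.0 / (1.0 + math.exp(-llr))

# The three signal-accuracy levels reported in the paper.
for q in (0.667, 0.55, 0.70):
    # Illustrative trial: two public choices for A vs. one private signal for B.
    print(f"q = {q}: P(A) = {posterior_prob_a(2, 1, q):.3f}")
```

A Bayesian agent weights private and public evidence equally (an effective β of 1), which is what makes the reported β ≈ 0.81 readable as underweighting and β > 1.55 as overweighting.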
📝 Abstract
As large language models (LLMs) integrate into collaborative teams, their social conformity -- the tendency to align with majority opinions -- has emerged as a key concern. In humans, conformity arises from informational influence (rational use of group cues for accuracy) or normative influence (social pressure for approval), with uncertainty moderating this balance by shifting decision-making from analytical to heuristic processing. It remains unclear whether these human psychological mechanisms apply to LLMs. This study adapts the information cascade paradigm from behavioral economics to quantitatively disentangle the two drivers and to examine this moderating effect. We evaluated nine leading LLMs across three decision-making scenarios (medical, legal, investment), manipulating information uncertainty (q = 0.667, 0.55, and 0.70, respectively). Our results indicate that informational influence underpins the models' behavior across all contexts, with accuracy and confidence consistently rising with stronger evidence. However, this foundational mechanism is dramatically modulated by uncertainty. In low-to-medium uncertainty scenarios, the informational process is expressed as a conservative strategy, in which LLMs systematically underweight all evidence sources. In contrast, high uncertainty triggers a critical shift: while still processing information, the models additionally exhibit a normative-like amplification, causing them to overweight public signals (β > 1.55 vs. private β = 0.81).
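Signal weights of this kind are commonly estimated by regressing each decision on the private- and public-signal evidence, with β = 1 as the Bayesian benchmark. The sketch below is a hypothetical illustration of that analysis on synthetic data; the logit specification, variable names, and true weights (0.8 private, 1.5 public) are assumptions for demonstration, not the paper's released pipeline.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, q = 500, 0.667
llr_unit = np.log(q / (1 - q))  # evidence carried by one signal

# Synthetic trials: signed signal counts (positive favours option A).
private_llr = rng.choice([-1, 1], size=n) * llr_unit
public_llr = rng.integers(-3, 4, size=n) * llr_unit

# Simulate an agent that underweights private and overweights public evidence.
logit = 0.8 * private_llr + 1.5 * public_llr
chose_a = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Logistic regression recovers the effective weights.
X = sm.add_constant(np.column_stack([private_llr, public_llr]))
fit = sm.Logit(chose_a, X).fit(disp=0)
print(fit.params)  # [intercept, beta_private ~ 0.8, beta_public ~ 1.5]
```

Under this reading, the abstract's contrast (public β > 1.55 vs. private β = 0.81 at high uncertainty) corresponds to fitted coefficients straddling the Bayesian value of 1 in opposite directions.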