Social Dynamics as Critical Vulnerabilities that Undermine Objective Decision-Making in LLM Collectives

📅 2026-04-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the susceptibility of large language model (LLM) agents, acting as human proxies in multi-agent systems, to social influences that compromise objective judgment during collective opinion integration. For the first time, key social-psychological mechanisms, including conformity, perceived expertise, dominant-speaker effects, and rhetorical persuasion, are systematically incorporated into the analysis of LLM-based group decision-making. Through controlled experiments manipulating adversarial group size, relative agent competence, argument length, and rhetorical style, the research quantifies how social pressure degrades decision accuracy. Results demonstrate that larger opposing groups, higher-competence peers, and longer arguments significantly reduce decision accuracy, while rhetorical strategies emphasizing credibility or logical structure can deliberately induce bias. These findings reveal human-like vulnerabilities in LLM agents' social cognition, offering a theoretical foundation for enhancing the robustness of AI-driven collective decision-making.
📝 Abstract
Large language model (LLM) agents are increasingly acting as human delegates in multi-agent environments, where a representative agent integrates diverse peer perspectives to make a final decision. Drawing inspiration from social psychology, we investigate how the reliability of this representative agent is undermined by the social context of its network. We define four key phenomena (social conformity, perceived expertise, dominant speaker effect, and rhetorical persuasion) and systematically manipulate the number of adversaries, relative intelligence, argument length, and argumentative styles. Our experiments demonstrate that the representative agent's accuracy consistently declines as social pressure increases: larger adversarial groups, more capable peers, and longer arguments all lead to significant performance degradation. Furthermore, rhetorical strategies emphasizing credibility or logic can further sway the agent's judgment, depending on the context. These findings reveal that multi-agent systems are sensitive not only to individual reasoning but also to the social dynamics of their configuration, highlighting critical vulnerabilities in AI delegates that mirror the psychological biases observed in human group decision-making.
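The adversarial-group-size manipulation described in the abstract can be sketched in a few lines: a representative agent is shown a target question together with peer opinions, most of which confidently assert a wrong answer, and the size of that adversarial bloc is the experimental variable. The function name, prompt format, and example values below are hypothetical illustrations of this setup, not the authors' actual code.

```python
# Minimal sketch (assumed names/format): build the social context a
# representative agent would see under conformity pressure from
# n_adversaries wrong-but-confident peers and n_supporters correct peers.

def build_context(question, correct, wrong, n_adversaries, n_supporters=1):
    """Assemble peer messages plus the final instruction for the representative agent."""
    peers = []
    for i in range(n_adversaries):
        peers.append(f"Agent {i + 1}: I am confident the answer is {wrong}.")
    for j in range(n_supporters):
        peers.append(f"Agent {n_adversaries + j + 1}: I believe the answer is {correct}.")
    return (
        f"Question: {question}\n"
        + "\n".join(peers)
        + "\nAs the representative, give the final answer."
    )

# Example: three adversarial peers versus one correct peer.
ctx = build_context("What is 17 * 24?", "408", "398", n_adversaries=3)
```

Sweeping `n_adversaries` while holding the question fixed, and scoring whether the representative agent's final answer stays correct, would reproduce the paper's conformity manipulation in outline.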
Problem

Research questions and friction points this paper is trying to address.

social dynamics
LLM collectives
objective decision-making
social conformity
multi-agent systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

social dynamics
LLM collectives
representative agent
social conformity
rhetorical persuasion