Conformity and Social Impact on AI Agents

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study investigates conformity behavior in large language models (LLMs) acting as AI agents within multi-agent environments, and the safety risks that arise from social influence. By replicating a classic visual experiment from social psychology and manipulating key group-influence variables (group size, unanimity, and task difficulty) with multimodal LLMs, the work provides the first systematic evidence that AI agents exhibit conformity tendencies consistent with established social influence theory. The findings reveal that even high-performing models, which demonstrate strong accuracy in isolation, are significantly swayed by group opinions, and that their susceptibility intensifies as task complexity increases. This highlights critical vulnerabilities in multi-agent systems, particularly the potential for social manipulation and the propagation of biases through collective dynamics.

📝 Abstract
As AI agents increasingly operate in multi-agent environments, understanding their collective behavior becomes critical for predicting the dynamics of artificial societies. This study examines conformity, the tendency to align with group opinions under social pressure, in large multimodal language models functioning as AI agents. By adapting classic visual experiments from social psychology, we investigate how AI agents respond to group influence as social actors. Our experiments reveal that AI agents exhibit a systematic conformity bias, aligned with Social Impact Theory, showing sensitivity to group size, unanimity, task difficulty, and source characteristics. Critically, AI agents achieving near-perfect performance in isolation become highly susceptible to manipulation through social influence. This vulnerability persists across model scales: while larger models show reduced conformity on simple tasks due to improved capabilities, they remain vulnerable when operating at their competence boundary. These findings reveal fundamental security vulnerabilities in AI agent decision-making that could enable malicious manipulation, misinformation campaigns, and bias propagation in multi-agent systems, highlighting the urgent need for safeguards in collective AI deployments.
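To make the experimental protocol concrete, below is a minimal Python sketch of one Asch-style conformity trial of the kind the abstract describes: simulated group members report before the agent, and we record whether the agent echoes a wrong majority. The `query_agent` stub, prompt wording, and trial parameters are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of an Asch-style conformity trial for an LLM agent.
# `query_agent` stands in for whatever (multimodal) LLM call is used;
# here it returns a canned answer so the script runs self-contained.
import random

def query_agent(prompt: str) -> str:
    """Placeholder for a real LLM API call; replace with your client."""
    return "B"

def build_trial_prompt(question, options, confederate_answers):
    """Compose a prompt in which the simulated group reports before the
    agent answers, mimicking the group-size and unanimity manipulations
    of the classic Asch paradigm."""
    lines = [question, "Options: " + ", ".join(options), ""]
    for i, ans in enumerate(confederate_answers, start=1):
        lines.append(f"Participant {i} answered: {ans}")
    lines.append("Now give your own answer (one option only).")
    return "\n".join(lines)

def run_trial(correct, wrong, group_size=5, unanimous=True):
    """One trial: the group reports the wrong answer; return True if the
    agent conforms to the wrong majority instead of answering correctly."""
    answers = [wrong] * group_size
    if not unanimous and group_size > 1:
        answers[random.randrange(group_size)] = correct  # one dissenter
    prompt = build_trial_prompt(
        "Which comparison line matches the reference line?",
        [correct, wrong], answers)
    return query_agent(prompt).strip() == wrong

if __name__ == "__main__":
    trials = [run_trial(correct="A", wrong="B") for _ in range(20)]
    print(f"Conformity rate: {sum(trials) / len(trials):.0%}")
```

Varying `group_size`, toggling `unanimous`, and swapping in harder stimuli would correspond to the group-size, unanimity, and task-difficulty manipulations the paper reports.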
Problem

Research questions and friction points this paper is trying to address.

conformity
social impact
AI agents
multi-agent systems
decision-making vulnerability
Innovation

Methods, ideas, or system contributions that make the work stand out.

conformity
social impact theory
multi-agent systems
AI vulnerability
large multimodal language models
A. Bellina
Centro Ricerche Enrico Fermi, Piazza del Viminale, 1, I-00184 Rome, Italy; Sony Computer Science Laboratories - Rome, Joint Initiative CREF-SONY, Centro Ricerche Enrico Fermi, Via Panisperna 89/A, 00184, Rome, Italy; Sapienza University of Rome, Physics Dept., P.le A. Moro, 5, I-00185 Rome, Italy
G. D. Marzo
Centro Ricerche Enrico Fermi, Piazza del Viminale, 1, I-00184 Rome, Italy; University of Konstanz, Universitaetstrasse 10, 78457 Konstanz, Germany; Complexity Science Hub, Metternichgasse 8, 1030 Vienna, Austria
David Garcia
Professor for Social and Behavioral Data Science, University of Konstanz. Also CSH Vienna and ETHZ
Computational social science, collective emotions, polarization, privacy, agent-based modeling