An Empirical Study of Group Conformity in Multi-Agent Systems

📅 2025-06-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Prior research lacks systematic investigation into social bias propagation and conformity behavior in large language model (LLM) agents, particularly regarding emergent group-level conformity in socially contentious debates. Method: We design a multi-agent debate framework simulating over 2,500 debates on controversial topics, integrating logistic regression, ANOVA, and dynamic stance tracking to quantify agent capabilities and model the stance evolution of neutral agents. Contribution/Results: We demonstrate for the first time that LLM agents spontaneously exhibit human-like conformity without explicit training; agent intelligence, not agent count, dominates stance convergence; 72% of neutral agents align with the majority stance; and bias propagation speed reaches 3.8× that observed in human experiments. We propose "agent influence" as a novel metric for quantifying behavioral impact, providing both theoretical grounding and empirical evidence for modeling LLM social behavior and advancing controllable, governance-oriented AI systems.
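The paper's 72% figure is a majority-alignment rate for initially neutral agents. A minimal sketch of how such a rate could be computed over logged debates (the `final_stances` / `neutral_final` data layout and function name here are assumptions, not the authors' actual pipeline):

```python
# Hypothetical sketch: fraction of initially neutral agents whose final
# stance matches the debate-level majority stance, aggregated over debates.
from collections import Counter

def majority_alignment_rate(debates):
    """debates: list of dicts with
       'final_stances'  - final stances of all agents in the debate
       'neutral_final'  - final stances of the initially neutral agents."""
    aligned = total = 0
    for d in debates:
        # Majority stance among all agents at the end of the debate.
        majority, _ = Counter(d["final_stances"]).most_common(1)[0]
        for stance in d["neutral_final"]:
            total += 1
            aligned += stance == majority
    return aligned / total if total else 0.0

# Toy example: two debates, four neutral agents in total.
debates = [
    {"final_stances": ["pro", "pro", "con", "pro"],
     "neutral_final": ["pro", "pro", "con"]},
    {"final_stances": ["con", "con", "pro"],
     "neutral_final": ["con"]},
]
print(majority_alignment_rate(debates))  # 0.75
```

The per-agent binary outcome (aligned vs. not) produced this way is also the natural dependent variable for the logistic-regression analysis the summary mentions.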

📝 Abstract
Recent advances in Large Language Models (LLMs) have enabled multi-agent systems that simulate real-world interactions with near-human reasoning. While previous studies have extensively examined biases related to protected attributes such as race, the emergence and propagation of biases on socially contentious issues in multi-agent LLM interactions remain underexplored. This study explores how LLM agents shape public opinion through debates on five contentious topics. By simulating over 2,500 debates, we analyze how initially neutral agents, assigned a centrist disposition, adopt specific stances over time. Statistical analyses reveal significant group conformity mirroring human behavior; LLM agents tend to align with numerically dominant groups or with more intelligent agents, which exert greater influence. These findings underscore the crucial role of agent intelligence in shaping discourse and highlight the risks of bias amplification in online interactions. Our results emphasize the need for policy measures that promote diversity and transparency in LLM-generated discussions to mitigate the risks of bias propagation within anonymous online environments.
Problem

Research questions and friction points this paper is trying to address.

Explores bias emergence in multi-agent LLM debates on contentious topics
Analyzes group conformity in LLM agents mirroring human behavior
Highlights risks of bias amplification in anonymous online interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulates over 2,500 multi-agent LLM debates on five contentious topics
Quantifies group conformity with logistic regression, ANOVA, and dynamic stance tracking
Proposes "agent influence" as a metric for behavioral impact and highlights bias-amplification risks online
Min Choi
Kim & Chang AI&IT System Center
Keonwoo Kim
NAVER Cloud
Natural Language Processing
Sungwon Chae
Kim & Chang AI&IT System Center
Sangyeob Baek
Kim & Chang AI&IT System Center