Power Echoes: Investigating Moderation Biases in Online Power-Asymmetric Conflicts

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates power-related biases among human moderators in online asymmetric power conflicts and examines how these biases are influenced by AI recommendations. Employing a mixed experimental design grounded in real-world consumer–merchant dispute scenarios, the research constructs a human–AI collaborative moderation framework and systematically identifies multiple forms of bias favoring the more powerful party. The findings reveal that AI assistance generally mitigates most of these biases; however, under specific conditions, it paradoxically exacerbates certain types. This work is the first to delineate the manifestations of moderation bias under power asymmetry and to uncover the moderating role of AI, offering empirical evidence and novel insights for optimizing platform moderation systems and designing fairer human–AI collaborative decision-making mechanisms.

📝 Abstract
Online power-asymmetric conflicts are prevalent, and most platforms currently rely on human moderators to resolve them. Prior studies have investigated human moderation biases across a range of scenarios, but moderation biases under power-asymmetric conflicts remain unexplored. We therefore investigate the types of power-related biases human moderators exhibit when moderating power-asymmetric conflicts (RQ1) and further explore how AI suggestions influence these biases (RQ2). To this end, we conducted a mixed-design experiment with 50 participants, using real conflicts between consumers and merchants as the scenario. Results reveal several biases favoring the more powerful party in both moderation modes (human-only and AI-assisted). AI assistance alleviates most biases of human moderation but also amplifies a few. Based on these results, we propose several insights for future research on human moderation and human-AI collaborative moderation systems for power-asymmetric conflicts.
Problem

Research questions and friction points this paper is trying to address.

power-asymmetric conflicts
moderation biases
human moderation
AI assistance
online content moderation
Innovation

Methods, ideas, or system contributions that make the work stand out.

power-asymmetric conflicts
moderation bias
human-AI collaboration
content moderation
algorithmic fairness
Yaqiong Li
Fudan University
Peng Zhang
Fudan University
Peixu Hou
Meituan
Kainan Tu
Fudan University
Guangping Zhang
Fudan University
Shan Qu
Meituan
Wenshi Chen
Meituan
Yan Chen
Virginia Tech, Assistant Professor
Human Computer Interaction · Programming Support Tools · CS Education · Learning @ Scale
Ning Gu
Fudan University
Collaborative Computing · CSCW · Social Computing · Human Computer Interaction · Recommendation
Tun Lu
Fudan University