🤖 AI Summary
This study investigates power-related biases among human moderators in online asymmetric power conflicts and examines how these biases are influenced by AI recommendations. Using a mixed experimental design grounded in real-world consumer–merchant dispute scenarios, the research constructs a human–AI collaborative moderation framework and systematically identifies multiple forms of bias favoring the more powerful party. The findings show that AI assistance mitigates most of these biases, yet under specific conditions it paradoxically exacerbates certain types. This work is the first to delineate how moderation bias manifests under power asymmetry and to uncover the moderating role of AI, offering empirical evidence and novel insights for optimizing platform moderation systems and designing fairer human–AI collaborative decision-making mechanisms.
📝 Abstract
Online power-asymmetric conflicts are prevalent, and most platforms currently rely on human moderators to resolve them. Prior work has examined human moderation biases across a range of scenarios, but biases in moderating power-asymmetric conflicts remain unexplored. We therefore investigate which power-related biases human moderators exhibit when moderating power-asymmetric conflicts (RQ1) and how AI suggestions influence these biases (RQ2). To this end, we conducted a mixed-design experiment with 50 participants, using real conflicts between consumers and merchants as the scenario. Results reveal several biases favoring the more powerful party under both human-only and AI-assisted moderation. AI assistance alleviates most human moderation biases but amplifies a few. Based on these results, we offer insights for future research on human moderation and on human-AI collaborative moderation systems for power-asymmetric conflicts.