Community Moderation and the New Epistemology of Fact Checking on Social Media

📅 2025-05-26
🤖 AI Summary
This study addresses the tension between professional fact-checking and community moderation in mitigating misinformation on social media platforms. Method: Through cross-platform policy analysis, comparative case studies of prominent fact-checking mechanisms (with emphasis on community-driven models such as Community Notes), and modeling grounded in social cognition theory, the paper develops a novel “epistemology of fact-checking” framework. Contribution/Results: The framework elucidates the structural interplay among cognitive bias, contextual framing, and consensus formation. Findings indicate that while community moderation excels in speed and scalability, it remains constrained by cognitive biases and cultural heterogeneity—and thus cannot supplant professional verification. Rather, the two modalities are fundamentally complementary. The study provides both a theoretical foundation and actionable design principles for developing layered, collaborative, and accountable hybrid moderation systems.

📝 Abstract
Social media platforms have traditionally relied on internal moderation teams and partnerships with independent fact-checking organizations to identify and flag misleading content. Recently, however, platforms including X (formerly Twitter) and Meta have shifted towards community-driven content moderation by launching their own versions of crowd-sourced fact-checking -- Community Notes. If effectively scaled and governed, such crowd-checking initiatives have the potential to combat misinformation with increased scale and speed, as community-driven efforts once did with spam. Nevertheless, general content moderation, especially for misinformation, is inherently more complex. Public perceptions of truth are often shaped by personal biases, political leanings, and cultural contexts, complicating consensus on what constitutes misleading content. This suggests that community efforts, while valuable, cannot replace the indispensable role of professional fact-checkers. Here we systematically examine the current approaches to misinformation detection across major platforms, explore the emerging role of community-driven moderation, and critically evaluate both the promises and challenges of crowd-checking at scale.
Problem

Research questions and friction points this paper is trying to address.

Examining community-driven moderation for misinformation detection on social media
Exploring challenges of crowd-sourced fact-checking versus professional methods
Evaluating scalability and effectiveness of community notes in combating misinformation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Community-driven content moderation shifts responsibility away from traditional professional teams
Crowd-sourced fact-checking scales misinformation detection
Hybrid approach balances community and professional fact-checkers
Isabelle Augenstein
Full Professor, Department of Computer Science, University of Copenhagen
Natural Language Processing, Machine Learning
Michiel Bakker
Google DeepMind, Massachusetts Institute of Technology
Machine Learning, AI Safety, Large Language Models, Computational Social Science
Tanmoy Chakraborty
Indian Institute of Technology Delhi, New Delhi, 110016, India
David Corney
Full Fact
machine learning, NLP, text analytics, media
Emilio Ferrara
Professor of Computer Science at the University of Southern California
Human-Centered AI, Social Computing, Network Science, AI Safety, Computational Social Science
Iryna Gurevych
Full Professor, TU Darmstadt; Adjunct Professor, MBZUAI, UAE; Affiliated Professor, INSAIT, Bulgaria
Natural Language Processing, Large Language Models, Artificial Intelligence
Scott Hale
University of Oxford, Broad St, Oxford OX1 3AZ, United Kingdom
Eduard Hovy
University of Melbourne, CMU
NLP, AI
Heng Ji
Professor of Computer Science, AICE Director, ASKS Director, UIUC, Amazon Scholar
Natural Language Processing, Large Language Models
Irene Larraz
Universidad de Navarra
Journalism, Fact-checking, Disinformation, Debunking
Filippo Menczer
Luddy Distinguished Professor of Informatics and Computer Science, Indiana University
Misinformation, Web Science, Network Science, Computational Social Science, Social Media
Preslav Nakov
Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)
Computational Linguistics, Large Language Models, Fact-checking, Fake News
Paolo Papotti
Professor at EURECOM
Data Management, Information Quality, LLMs
Dhruv Sahnan
PhD Student in NLP @ MBZUAI
misinformation, disinformation, fact-checking, human-AI collaboration for fact-checking
Greta Warren
University of Copenhagen, Nørregade 10, 1172 København, Denmark
Giovanni Zagni
Pagella Politica/Facta, viale Monza 259/265, Milano, 20125, Italy