🤖 AI Summary
This study addresses the tension between professional fact-checking and community moderation in mitigating misinformation on social media platforms. Method: Through cross-platform policy analysis, comparative case studies of prominent fact-checking mechanisms (with emphasis on community-driven models such as Community Notes), and modeling grounded in social cognition theory, the paper develops a novel "epistemology of fact-checking" framework. Contribution/Results: The framework elucidates the structural interplay among cognitive bias, contextual framing, and consensus formation. Findings indicate that while community moderation excels in speed and scalability, it remains constrained by cognitive biases and cultural heterogeneity, and thus cannot supplant professional verification; the two modalities are fundamentally complementary. The study provides both a theoretical foundation and actionable design principles for developing layered, collaborative, and accountable hybrid moderation systems.
📝 Abstract
Social media platforms have traditionally relied on internal moderation teams and partnerships with independent fact-checking organizations to identify and flag misleading content. Recently, however, platforms including X (formerly Twitter) and Meta have shifted towards community-driven content moderation by launching their own versions of crowd-sourced fact-checking, such as Community Notes. If effectively scaled and governed, such crowd-checking initiatives have the potential to combat misinformation with increased scale and speed, much as community-driven efforts once did with spam. Nevertheless, general content moderation, especially for misinformation, is inherently more complex. Public perceptions of truth are often shaped by personal biases, political leanings, and cultural contexts, complicating consensus on what constitutes misleading content. This suggests that community efforts, while valuable, cannot replace the indispensable role of professional fact-checkers. Here we systematically examine the current approaches to misinformation detection across major platforms, explore the emerging role of community-driven moderation, and critically evaluate both the promises and challenges of crowd-checking at scale.