🤖 AI Summary
X’s Community Notes introduced a “request notes” feature to improve the scalability of crowdsourced fact-checking, yet its impact on what gets checked, by whom, and with what quality remains unclear. Method: Using a dataset of 98,685 requested posts and their associated notes, the authors conduct quantitative analysis and statistical modeling to assess the mechanism’s real-world effects. Contribution/Results: (1) Contributors prioritize posts with higher misleadingness and from authors with greater misinformation exposure, but neglect the political content emphasized by requestors; (2) Selection diverges along partisan lines, with contributors more often annotating posts from Republicans while requestors surface more from Democrats; (3) Only 12% of requested posts receive notes from top contributors, yet these notes are rated as more helpful and less polarized than others, partly reflecting top contributors’ selective fact-checking of misleading posts. The findings indicate that selective participation by high-quality contributors, not mere request volume, is the critical lever for improving the efficacy and credibility of platform-mediated fact-checking. This study provides empirical evidence for designing scalable, trustworthy crowdsourced verification systems.
📝 Abstract
X's Community Notes is a crowdsourced fact-checking system. To improve its scalability, X recently introduced the "Request Community Note" feature, enabling users to solicit fact-checks from contributors on specific posts. Yet its implications for the system -- what gets checked, by whom, and with what quality -- remain unclear. Using 98,685 requested posts and their associated notes, we evaluate how requests shape the Community Notes system. We find that contributors prioritize posts with higher misleadingness and from authors with greater misinformation exposure, but neglect the political content emphasized by requestors. Selection also diverges along partisan lines: contributors more often annotate posts from Republicans, while requestors surface more from Democrats. Although only 12% of posts receive request-fostered notes from top contributors, these notes are rated as more helpful and less polarized than others, partly reflecting top contributors' selective fact-checking of misleading posts. Our findings highlight both the limitations and the promise of requests for scaling high-quality community-based fact-checking.