Are Fact-Checking Tools Helpful? An Exploration of the Usability of Google Fact Check

📅 2024-02-20
📈 Citations: 2
Influential: 0
🤖 AI Summary
The practical efficacy of existing fact-checking search engines such as Google Fact Check in mitigating social media misinformation remains largely unassessed. Method: This study conducts a large-scale empirical evaluation of Google Fact Check's coverage, reliability, and wording sensitivity on 1,000 COVID-19-related false claims, using an assessment framework that integrates claim categorization, sentiment analysis, and textual statistics. Contribution/Results: Google Fact Check retrieves fact-checking results for only 15.8% of the input claims, but the retrieved results are relatively reliable. Claim sentiment polarity, length, lexical choice, and paraphrastic variants significantly affect retrieval performance, revealing strong wording sensitivity. Based on these findings, the authors propose empirically grounded strategies for optimizing user input, offering practical guidelines for improving the usability and robustness of automated fact-checking tools.

📝 Abstract
Fact-checking-specific search tools such as Google Fact Check are a promising way to combat misinformation on social media, especially during events bringing significant social influence, such as the COVID-19 pandemic and the U.S. presidential elections. However, the usability of such an approach has not been thoroughly studied. We evaluated the performance of Google Fact Check by analyzing the retrieved fact-checking results regarding 1,000 COVID-19-related false claims and found it able to retrieve the fact-checking results for 15.8% of the input claims, and the rendered results are relatively reliable. We also found that the false claims receiving different fact-checking verdicts (i.e., "False," "Partly False," "True," and "Unratable") tend to reflect diverse emotional tones, and fact-checking sources tend to check the claims in different lengths and using dictionary words to various extents. Claim variations addressing the same issue yet described differently are likely to retrieve distinct fact-checking results. We suggest that the quantities of the retrieved fact-checking results could be optimized and that slightly adjusting input wording may be the best practice for users to retrieve more useful information. This study aims to contribute to the understanding of state-of-the-art fact-checking tools and information integrity.
Problem

Research questions and friction points this paper addresses.

How usable is the Google Fact Check tool for combating misinformation?
How well does it retrieve fact-checking results for COVID-19-related false claims?
How do variations in claim wording affect the retrieved fact-checking results?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale empirical evaluation of Google Fact Check's performance
Analysis of retrieval results for 1,000 COVID-19-related false claims
Practical recommendations on adjusting input wording to improve retrieval
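The retrieval step evaluated in this paper can be reproduced programmatically through Google's public Fact Check Tools API (`claims:search` endpoint). The sketch below only builds the request URL for a given claim; the parameter choices and the placeholder API key are illustrative assumptions, not details taken from the paper.

```python
from urllib.parse import urlencode

# Public endpoint of the Google Fact Check Tools API.
FACT_CHECK_ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def build_fact_check_query(claim: str, api_key: str, language: str = "en") -> str:
    """Build a claims:search request URL for a textual claim.

    The language code and query parameters shown here are assumptions
    for illustration; consult the API documentation for the full set.
    """
    params = urlencode({"query": claim, "languageCode": language, "key": api_key})
    return f"{FACT_CHECK_ENDPOINT}?{params}"

# Example: URL for checking a COVID-19-related claim (key is a placeholder).
url = build_fact_check_query("5G towers spread COVID-19", api_key="YOUR_API_KEY")
```

Fetching this URL (e.g., with `urllib.request` or `requests`) returns a JSON list of matching fact-checks, which is the kind of output whose coverage and wording sensitivity the paper measures.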
Qiangeng Yang
University of Florida
Tess Christensen
University of Florida
Shlok Gilda
University of Florida
Juliana Fernandes
University of Florida
Daniela Oliveira
National Science Foundation