🤖 AI Summary
Problem: Current AI-driven fact-checking interventions on social media suffer from low communicative efficacy, overly coarse warning labels, and inadequate handling of ambiguous misinformation. Method: We propose a fine-grained, context-adaptive credibility visualization metric designed to respect users' attentional constraints and cognitive load, shifting emphasis from binary true/false judgments toward critical information evaluation. Through an online controlled experiment (n=537), multi-dimensional perceptual scales, and qualitative interviews, we show that our metric significantly improves users' information discernment (p<0.01), system trust (+28%), and adoption intention (+31%), whereas conventional source attribution shows no significant effect. Contribution/Results: This work introduces the first interpretable, hierarchical framework for representing AI fact-checking credibility and provides empirical evidence of its effectiveness and cognitive accessibility in authentic social media contexts.
📝 Abstract
Reducing the spread of misinformation is challenging. AI-based fact-verification systems offer a promising solution to the high costs and slow pace of traditional fact-checking, yet how to effectively communicate verification results to users remains an open problem. Warning labels may seem an easy solution, but they fail to account for fuzzy misinformation that is not entirely false. Additionally, users' limited attention spans and the social media information environment should be taken into account when designing the presentation. An online experiment (n = 537) investigates the impact of source and granularity on users' perception of information veracity and of the system's usefulness and trustworthiness. Findings show that fine-grained indicators foster more nuanced opinions, greater information awareness, and stronger intention to use fact-checking systems. Source differences had minimal impact on opinions and perceptions, except for perceived informativeness. Qualitative findings suggest the proposed indicators promote critical thinking. We discuss implications for designing concise, user-friendly AI fact-checking feedback.