More Than Just Warnings: Exploring the Ways of Communicating Credibility Assessment on Social Media

📅 2025-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AI-driven fact-checking interventions on social media suffer from low communicative efficacy, overly coarse warning labels, and inadequate handling of ambiguous misinformation. Method: We propose a fine-grained, context-adaptive credibility visualization metric designed to respect users’ attentional constraints and cognitive load, shifting emphasis from binary true/false judgments toward critical information evaluation. Through an online controlled experiment (n=537), multi-dimensional perceptual scales, and qualitative interviews, we validate that our metric significantly improves users’ information discernment (p<0.01), system trust (+28%), and adoption intention (+31%), whereas conventional source attribution shows no significant effect. Contribution/Results: This work introduces the first interpretable, hierarchical framework for representing AI fact-checking credibility and provides empirical evidence of its effectiveness and cognitive accessibility in authentic social media contexts.

📝 Abstract
Reducing the spread of misinformation is challenging. AI-based fact verification systems offer a promising solution by addressing the high costs and slow pace of traditional fact-checking. However, the problem of how to effectively communicate verification results to users remains unsolved. Warning labels may seem an easy solution, but they fail to account for fuzzy misinformation that is not entirely fake. Additionally, users' limited attention spans and the information-rich social media environment should be taken into account when designing the presentation. An online experiment (n = 537) investigates the impact of sources and granularity on users' perception of information veracity and of the system's usefulness and trustworthiness. Findings show that fine-grained indicators enhance nuanced opinions, information awareness, and the intention to use fact-checking systems. Source differences had minimal impact on opinions and perceptions, except for informativeness. Qualitative findings suggest the proposed indicators promote critical thinking. We discuss implications for designing concise, user-friendly AI fact-checking feedback.
Problem

Research questions and friction points this paper is trying to address.

Effective communication of AI-based fact verification results to users.
Addressing limitations of warning labels for fuzzy misinformation.
Designing user-friendly feedback for enhanced critical thinking.
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-based fact verification systems reduce misinformation spread
Fine-grained indicators enhance user trust and critical thinking
User-friendly feedback design improves fact-checking system adoption
Huiyun Tang
University of Luxembourg
Human-computer interaction; misinformation
Björn Rohles
Digital Learning Hub, Ministère de l’Éducation nationale, de l’Enfance et de la Jeunesse, Esch-sur-Alzette, Luxembourg
Yuwei Chuai
SnT, University of Luxembourg, Luxembourg
Gabriele Lenzini
Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg
Sociotechnical Cybersecurity
Anastasia Sergeeva
University of Luxembourg
AI-mediated communication; conversational agents; misinformation; XR