🤖 AI Summary
Existing eXplainable AI (XAI) methods predominantly focus on content-based explanations of misinformation (e.g., linguistic features), neglecting its socio-contextual dimensions. Method: This paper introduces *social explanations*, which incorporate sociological factors such as information diffusion pathways and source credibility, and formalizes *explanatory alignment* between content and social explanations. We systematically evaluate the impact of content-only, social-only, and combined (aligned vs. misaligned) explanations on users' misinformation detection performance via online crowdsourcing experiments in the COVID-19 (Prolific) and political (MTurk) domains. Results: (1) social and combined explanations significantly improve detection accuracy; (2) explanatory alignment serves as a critical moderating factor; and (3) explanation ordering exhibits domain-specific effects, e.g., presenting the social explanation before the content explanation yields better performance in the political domain. This work advances XAI for misinformation detection by shifting from content-centric to socially aware, fine-grained explainability design.
📝 Abstract
In this paper, we study the problem of AI explanation of misinformation, where the goal is to identify explanation designs that improve users' misinformation detection ability and their overall user experience. Our work is motivated by the limitations of current Explainable AI (XAI) approaches, which predominantly focus on content explanations that elucidate the linguistic features and sentence structures of the misinformation. To address this limitation, we explore explanations beyond content explanation, such as a "social explanation" that considers the broader social context surrounding misinformation, as well as a "combined explanation" in which content and social explanations are presented together, either aligned or misaligned with each other. To evaluate the comparative effectiveness of these AI explanations, we conduct two online crowdsourcing experiments, one in the COVID-19 domain (Study 1, on Prolific) and one in the political domain (Study 2, on MTurk). Our results show that AI explanations are generally effective in helping users detect misinformation, with their effectiveness significantly influenced by the alignment between content and social explanations. We also find that the order in which explanation types are presented, specifically whether a content or social explanation comes first, can influence detection accuracy, with differences between the COVID-19 and political domains. This work contributes to the more effective design of AI explanations, fostering a deeper understanding of how different explanation types and their combinations influence misinformation detection.