🤖 AI Summary
Automated fact-checking systems often generate explanations that are misaligned with human fact-checkers’ decision-making logic, undermining trust and collaboration. Method: Through in-depth interviews with 22 professional fact-checkers and qualitative thematic analysis, we elicited domain-expert requirements for explainability. Contribution/Results: We propose the first explainability framework tailored to fact-checking, grounded in three empirically derived explanation components: explicit reasoning paths, references to specific evidence, and clear signalling of uncertainty and information gaps. Together, these components support replicable, traceable, and verifiable explanations. The framework identifies six critical explanation needs that existing tools consistently fail to meet, and it provides both a theoretical foundation and practical design guidelines for developing and evaluating explainable AI (XAI) systems in fact-checking contexts, with the aim of making human-AI collaboration more effective.
📝 Abstract
The pervasiveness of large language models and generative AI in online media has amplified the need for effective automated fact-checking to assist fact-checkers in tackling the increasing volume and sophistication of misinformation. The complex nature of fact-checking demands that automated fact-checking systems provide explanations that enable fact-checkers to scrutinise their outputs. However, it is unclear how these explanations should align with the decision-making and reasoning processes of fact-checkers to be effectively integrated into their workflows. Through semi-structured interviews with fact-checking professionals, we bridge this gap by: (i) providing an account of how fact-checkers assess evidence, make decisions, and explain their processes; (ii) examining how fact-checkers use automated tools in practice; and (iii) identifying fact-checker explanation requirements for automated fact-checking tools. The findings show unmet explanation needs and identify important criteria for replicable fact-checking explanations that trace the model's reasoning path, reference specific evidence, and highlight uncertainty and information gaps.