🤖 AI Summary
Current clinical decision support systems lack rigorous human-centered evaluation frameworks for explainable artificial intelligence (XAI), hindering the translation of theoretical interpretability into real-world clinical utility.
Method: We propose the first human-centered XAI evaluation framework tailored to clinical stakeholders, derived from a systematic literature review that integrates XAI methodologies (e.g., saliency maps, counterfactual generation, surrogate models), human-factors assessment paradigms (usability testing, cognitive interviewing, mixed-reality experiments), and clinical adoption models (UTAUT, CFIR).
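As a concrete anchor for the first XAI methodology listed above, here is a minimal sketch of a gradient-based saliency map in PyTorch. `model` and `image` are hypothetical placeholders; this illustrates the general technique, not the survey's or the framework's specific implementation.

```python
# Minimal gradient-based saliency map: per-pixel importance as the
# magnitude of the top-class score's gradient w.r.t. the input.
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """image: (C, H, W) tensor. Returns an (H, W) importance map."""
    model.eval()
    image = image.detach().clone().requires_grad_(True)  # track input gradients
    score = model(image.unsqueeze(0)).max()              # top-class logit
    score.backward()                                     # populate image.grad
    return image.grad.abs().amax(dim=0)                  # collapse channels -> HxW
```

Such maps are cheap to compute at inference time, which matters for the explanation-latency barrier discussed below, but they still require human-centered validation to confirm clinicians actually interpret them correctly.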
Contribution: The framework establishes a three-tier taxonomy spanning clinical workflow integration, trust dimensions, and decision impact. It identifies seven core adoption barriers, including explanation latency, terminology mismatch, and accountability ambiguity, and distills twelve high-feasibility evaluation metrics. Furthermore, it provides a methodological foundation for human-centered validation aligned with U.S. FDA and European CE-marking regulatory requirements, bridging the gap between algorithmic explainability and clinical effectiveness.
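To make the three-tier taxonomy concrete, a minimal sketch of how it might be encoded for an evaluation protocol follows. The tier names come from the text above; the example metrics are hypothetical placeholders for illustration only, not the twelve metrics the framework actually distills.

```python
# Illustrative encoding of the three-tier evaluation taxonomy.
from dataclasses import dataclass, field

@dataclass
class EvaluationTier:
    name: str
    metrics: list[str] = field(default_factory=list)

taxonomy = [
    EvaluationTier("clinical workflow integration",
                   ["time-to-decision", "interruption rate"]),   # hypothetical metrics
    EvaluationTier("trust dimensions",
                   ["calibrated reliance score"]),               # hypothetical metric
    EvaluationTier("decision impact",
                   ["diagnostic accuracy delta"]),               # hypothetical metric
]
```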
📝 Abstract
Explainable AI (XAI) has become a crucial component of Clinical Decision Support Systems (CDSS), intended to enhance transparency, trust, and clinical adoption. However, although many XAI methods have been proposed, their effectiveness in real-world medical settings remains underexplored. This paper surveys human-centered evaluations of XAI methods in CDSS. By categorizing existing works by XAI methodology, evaluation framework, and clinical adoption challenge, we offer a structured view of the landscape. Our findings reveal key challenges in integrating XAI into healthcare workflows, and we propose a structured framework that aligns XAI evaluation methods with the needs of clinical stakeholders.