A Survey on Human-Centered Evaluation of Explainable AI Methods in Clinical Decision Support Systems

📅 2025-02-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current clinical decision support systems lack rigorous human-centered evaluation frameworks for explainable artificial intelligence (XAI), hindering the translation of theoretical interpretability into real-world clinical utility. Method: The authors propose the first human-centered XAI evaluation framework tailored to clinical stakeholders, derived from a systematic literature review that integrates XAI methodologies (e.g., saliency maps, counterfactual generation, surrogate models), human-factors assessment paradigms (usability testing, cognitive interviewing, mixed-reality experiments), and clinical adoption models (UTAUT and CFIR). Contribution: The framework establishes a three-tier taxonomy spanning clinical workflow integration, trust dimensions, and decision impact. It identifies seven core adoption barriers, including explanation latency, terminology mismatch, and accountability ambiguity, and distills twelve high-feasibility evaluation metrics. It also provides a methodological foundation for human-centered validation aligned with FDA and CE regulatory requirements, bridging the gap between algorithmic explainability and clinical effectiveness.

📝 Abstract
Explainable AI (XAI) has become a crucial component of Clinical Decision Support Systems (CDSS) for enhancing transparency, trust, and clinical adoption. However, while many XAI methods have been proposed, their effectiveness in real-world medical settings remains underexplored. This paper surveys human-centered evaluations of Explainable AI methods in Clinical Decision Support Systems. By categorizing existing work by XAI methodology, evaluation framework, and clinical adoption challenge, we offer a structured understanding of the landscape. Our findings reveal key challenges in integrating XAI into healthcare workflows, and we propose a structured framework to align XAI evaluation methods with the clinical needs of stakeholders.
Problem

Research questions and friction points this paper is trying to address.

Evaluate Explainable AI in clinical systems
Assess XAI effectiveness in medical settings
Align XAI evaluation with clinical needs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-centered XAI evaluation
Structured XAI framework
Clinical adoption challenges