Choose Your Explanation: A Comparison of SHAP and GradCAM in Human Activity Recognition

📅 2024-12-20
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Human activity recognition (HAR), particularly in cerebral palsy clinical settings, demands high-confidence AI decisions underpinned by rigorous model interpretability. Method: This work presents the first quantitative and qualitative comparative study of SHAP and Grad-CAM applied to skeleton-based graph convolutional networks (GCNs) for HAR. Leveraging perturbation robustness analysis and a multi-metric explainability evaluation framework, we systematically assess both methods across fidelity, computational efficiency, and spatiotemporal visualization capability. Contribution/Results: We identify complementary strengths: SHAP excels in fine-grained feature attribution and importance ranking, whereas Grad-CAM offers superior computational efficiency and intuitive spatiotemporal activation mapping. Based on these findings, we propose an application-oriented XAI method selection guideline. Empirical validation confirms that synergistic integration of SHAP and Grad-CAM significantly enhances model transparency and clinical adoptability, establishing an evidence-based foundation and practical roadmap for medical-grade explainable AI.
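The perturbation robustness analysis described above can be sketched in miniature: mask the features an explanation ranks as most important and measure how much the prediction drops. The snippet below uses a toy logistic classifier as a stand-in for the paper's skeleton GCN; all weights, inputs, and the `fidelity_drop` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy logistic classifier standing in for the skeleton GCN.
# Weights and inputs are illustrative, not taken from the paper.
w = np.array([2.0, -1.0, 0.5, 0.1, -0.2, 1.5])  # hypothetical feature weights

def predict(x):
    """Positive-class probability of the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def fidelity_drop(x, attribution, k=2):
    """Zero out the k features the explanation ranks as contributing most
    positively, and return the resulting drop in the prediction. A faithful
    explanation should produce a larger drop than an uninformative one."""
    top_k = np.argsort(-attribution)[:k]
    x_masked = x.copy()
    x_masked[top_k] = 0.0
    return predict(x) - predict(x_masked)

x = np.array([1.0, 1.0, -1.0, 2.0, 0.5, 1.0])
grad_times_input = w * x  # a simple gradient-based attribution
print(fidelity_drop(x, grad_times_input))   # masking truly important features
print(fidelity_drop(x, -grad_times_input))  # masking unhelpful features instead
```

Comparing the two printed drops gives a single fidelity number per explanation method, which is the kind of quantitative signal the multi-metric evaluation framework aggregates across samples.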

📝 Abstract
Explaining machine learning (ML) models using eXplainable AI (XAI) techniques has become essential to make them more transparent and trustworthy. This is especially important in high-stakes domains like healthcare, where understanding model decisions is critical to ensure ethical, sound, and trustworthy outcome predictions. However, users are often confused about which explainability method to choose for their specific use case. We present a comparative analysis of widely used explainability methods, Shapley Additive Explanations (SHAP) and Gradient-weighted Class Activation Mapping (GradCAM), within the domain of human activity recognition (HAR) utilizing graph convolutional networks (GCNs). By evaluating these methods on skeleton-based data from two real-world datasets, including a healthcare-critical cerebral palsy (CP) case, this study provides vital insights into both approaches' strengths, limitations, and differences, offering a roadmap for selecting the most appropriate explanation method based on specific models and applications. We quantitatively and qualitatively compare these methods, focusing on feature importance ranking, interpretability, and model sensitivity through perturbation experiments. While SHAP provides detailed input feature attribution, GradCAM delivers faster, spatially oriented explanations, making both methods complementary depending on the application's requirements. Given the importance of XAI in enhancing trust and transparency in ML models, particularly in sensitive environments like healthcare, our research demonstrates how SHAP and GradCAM could complement each other to provide more interpretable and actionable model explanations.
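The abstract's core contrast — SHAP's game-theoretic feature attribution versus GradCAM's one-pass gradient view — can be illustrated on a toy linear scorer. The sampling loop below is the generic Monte Carlo Shapley idea underlying SHAP's model-agnostic explainers, not the paper's code; the scorer, weights, and input are hypothetical.

```python
import numpy as np

# Toy linear scorer standing in for a class logit of the GCN.
# Weights are illustrative assumptions, not from the paper.
rng = np.random.default_rng(42)
w = np.array([0.5, -2.0, 1.0, 3.0])

def score(x):
    return x @ w

def shapley_sampling(x, f, baseline, n_perm=200, rng=rng):
    """Monte Carlo Shapley values: average each feature's marginal
    contribution to f over random feature orderings, switching features
    from the baseline to their observed values one at a time."""
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        z = baseline.copy()
        prev = f(z)
        for i in order:
            z[i] = x[i]
            cur = f(z)
            phi[i] += cur - prev
            prev = cur
    return phi / n_perm

x = np.array([1.0, 1.0, -1.0, 0.5])
phi = shapley_sampling(x, score, baseline=np.zeros(4))
grad_saliency = w * x  # gradient x input: the cheap, one-pass alternative
```

For a linear scorer the two attributions coincide (and the Shapley values sum to `score(x) - score(baseline)`); for a deep nonlinear GCN they generally diverge, which is precisely the gap the paper's comparison quantifies — at the cost of many more model evaluations for the Shapley estimate.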
Problem

Research questions and friction points this paper is trying to address.

Comparing SHAP and GradCAM for explainable AI in activity recognition
Evaluating XAI methods for healthcare model transparency and trust
Guidelines for choosing explanation methods in human activity analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compares SHAP and GradCAM for model explainability
Uses graph convolutional networks for activity recognition
Evaluates methods on healthcare-critical cerebral palsy data
Felix Tempel
Faculty of Informatics, Norwegian University of Science and Technology
D. Groos
Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology
E. A. F. Ihlen
Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology
Lars Adde
Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology; Clinic of Rehabilitation, St. Olavs Hospital, Trondheim University Hospital
Inga Strümke
Norwegian University of Science and Technology
Explainable AI (XAI) · Machine Learning