Explaining Human Activity Recognition with SHAP: Validating Insights with Perturbation and Quantitative Measures

📅 2024-11-06
🏛️ Comput. Biol. Med.
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the limited interpretability of skeleton-based human activity recognition (HAR) models, this paper proposes an interpretability validation framework that integrates SHAP attribution analysis, controllable temporal perturbation, and multi-dimensional fidelity quantification. Methodologically, it couples graph convolutional network (GCN) architectures with skeleton-specific joint perturbation strategies and systematically evaluates explanation reliability using deletion/insertion curves and Spearman rank correlation. The key contribution is the deep integration of SHAP explanations with dynamic perturbation experiments and quantitative validation, overcoming the limitations of conventional qualitative visualizations. On benchmarks including UCI HAR, SHAP-based explanations achieve an average ΔAUC < 0.08, significantly outperforming LIME and Grad-CAM. This establishes a verifiable interpretability paradigm for trustworthy HAR model deployment in high-stakes applications.
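The deletion-curve evaluation mentioned in the summary can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the toy linear "model" and the feature values are made-up stand-ins for a HAR classifier and its SHAP attributions. Features are removed in descending-importance order; a faithful ranking makes the score collapse quickly, giving a small deletion AUC.

```python
import numpy as np

def deletion_curve(model_fn, x, importances, baseline=0.0):
    """Zero out features in descending-importance order, recording the
    model score after each deletion; return the area under that curve.
    A faithful importance ranking makes the score drop fast (small AUC)."""
    order = np.argsort(-np.abs(importances))   # most important first
    x_pert = np.array(x, dtype=float)
    scores = [model_fn(x_pert)]
    for idx in order:
        x_pert[idx] = baseline                 # "delete" the feature
        scores.append(model_fn(x_pert))
    scores = np.array(scores)
    width = 1.0 / (len(scores) - 1)            # x-axis normalised to [0, 1]
    return float(np.sum((scores[:-1] + scores[1:]) / 2.0) * width)

# Toy stand-in for a classifier score: a weighted feature sum (illustrative)
w = np.array([4.0, 0.1, 2.0, 0.1])
model_fn = lambda x: float(w @ x)
x = np.ones(4)

auc_good = deletion_curve(model_fn, x, importances=w)        # true ranking
auc_bad = deletion_curve(model_fn, x, importances=w[::-1])   # reversed ranking
print(auc_good, auc_bad)  # the faithful ranking yields the smaller AUC
```

The insertion curve is the mirror image (start from the baseline and restore features in importance order), and the ΔAUC reported above compares such curves across explanation methods.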

Problem

Research questions and friction points this paper is trying to address.

Explaining GCN decisions in HAR using SHAP for interpretability
Validating SHAP insights via perturbation on key body points
Assessing impact of influential features on model performance metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses SHAP to explain GCN decisions
Introduces a novel perturbation scheme for validating explanations
Perturbs key body points to assess explanation fidelity
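The Spearman rank correlation used to quantify explanation reliability can be sketched with plain NumPy; the attribution vectors below are made-up numbers for illustration. The idea is that a trustworthy explanation should rank features consistently, e.g. before and after a mild perturbation.

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks.
    (No tie handling -- adequate for continuous attribution scores.)"""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Attribution scores before and after a mild perturbation (illustrative)
shap_orig = np.array([0.9, 0.1, 0.5, 0.3])
shap_pert = np.array([0.8, 0.2, 0.6, 0.25])  # same feature ranking

print(spearman_rho(shap_orig, shap_pert))  # → 1.0 (ranking preserved)
```

A rho near 1 indicates stable rankings; values near 0 or below flag explanations that reorder arbitrarily under perturbation.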
Felix Tempel
Faculty of Informatics, Norwegian University of Science and Technology
E. A. F. Ihlen
Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology
Lars Adde
Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology, Clinic of Rehabilitation, St. Olavs Hospital, Trondheim University Hospital
Inga Strümke
Norwegian University of Science and Technology
Explainable AI (XAI) · Machine Learning · Beyond Standard Model physics · Supersymmetry