Explainable Human Activity Recognition: A Unified Review of Concepts and Mechanisms

📅 2026-04-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep learning models for human activity recognition (HAR) often lack interpretability, hindering their trustworthy deployment in high-stakes domains such as healthcare monitoring. This work proposes a unified perspective that decouples interpretability into conceptual dimensions and algorithmic mechanisms, introducing the first mechanism-centered taxonomy for explainable AI (XAI) in HAR. It systematically integrates XAI approaches across wearable, environmental, physiological, and multimodal sensing modalities, encompassing dominant paradigms such as feature importance, attention mechanisms, and post-hoc explanations. By clarifying ambiguities in existing literature and critically examining challenges related to temporality, multimodality, and semantic complexity, the study delineates the explanatory objectives, applicable contexts, and limitations of current methods, reviews evaluation practices, and charts a path toward reliable, deployable, and human-centered explainable HAR systems.

📝 Abstract
Human activity recognition (HAR) has become a key component of intelligent systems for healthcare monitoring, assistive living, smart environments, and human-computer interaction. Although deep learning has substantially improved HAR performance on multivariate sensor data, the resulting models often remain opaque, limiting trust, reliability, and real-world deployment. Explainable artificial intelligence (XAI) has therefore emerged as a critical direction for making HAR systems more transparent and human-centered. This paper presents a comprehensive review of explainable HAR methods across wearable, ambient, physiological, and multimodal sensing settings. We introduce a unified perspective that separates conceptual dimensions of explainability from algorithmic explanation mechanisms, reducing ambiguities in prior surveys. Building on this distinction, we present a mechanism-centric taxonomy of XAI-HAR methods covering major explanation paradigms. The review examines how these methods address the temporal, multimodal, and semantic complexities of HAR, and summarizes their interpretability objectives, explanation targets, and limitations. In addition, we discuss current evaluation practices, highlight key challenges in achieving reliable and deployable XAI-HAR, and outline directions toward trustworthy activity recognition systems that better support human understanding and decision-making.
Problem

Research questions and friction points this paper is trying to address.

Explainable Artificial Intelligence
Human Activity Recognition
Model Transparency
Interpretability
Trustworthy AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI
Human Activity Recognition
Mechanism-centric taxonomy
Multimodal sensing
Interpretability
Mainak Kundu
Department of Electrical Engineering, University of South Florida, Tampa, FL 33620 USA
Catherine Chen
Department of Computer Engineering, California Polytechnic State University, San Luis Obispo, CA 93407 USA
Rifatul Islam
Assistant Professor of Computer Science, Kennesaw State University
AR/VR · Cybersickness · AI · Multimodal Learning · Human-Centered Computing
Ismail Uysal
Department of Electrical Engineering, University of South Florida, Tampa, FL 33620 USA
Ria Kanjilal
Department of Computer Engineering, California Polytechnic State University, San Luis Obispo, CA 93407 USA