🤖 AI Summary
Deep learning models for human activity recognition (HAR) often lack interpretability, hindering their trustworthy deployment in high-stakes domains such as healthcare monitoring. This work proposes a unified perspective that decouples interpretability into conceptual dimensions and algorithmic mechanisms, introducing the first mechanism-centered taxonomy for explainable AI (XAI) in HAR. It systematically integrates XAI approaches across wearable, environmental, physiological, and multimodal sensing modalities, encompassing dominant paradigms such as feature importance, attention mechanisms, and post-hoc explanations. By clarifying ambiguities in existing literature and critically examining challenges related to temporality, multimodality, and semantic complexity, the study delineates the explanatory objectives, applicable contexts, and limitations of current methods, reviews evaluation practices, and charts a path toward reliable, deployable, and human-centered explainable HAR systems.
📝 Abstract
Human activity recognition (HAR) has become a key component of intelligent systems for healthcare monitoring, assistive living, smart environments, and human-computer interaction. Although deep learning has substantially improved HAR performance on multivariate sensor data, the resulting models often remain opaque, limiting trust, reliability, and real-world deployment. Explainable artificial intelligence (XAI) has therefore emerged as a critical direction for making HAR systems more transparent and human-centered. This paper presents a comprehensive review of explainable HAR methods across wearable, ambient, physiological, and multimodal sensing settings. We introduce a unified perspective that separates conceptual dimensions of explainability from algorithmic explanation mechanisms, reducing ambiguities in prior surveys. Building on this distinction, we present a mechanism-centric taxonomy of XAI-HAR methods covering the major explanation paradigms. The review examines how these methods address the temporal, multimodal, and semantic complexities of HAR, and summarizes their interpretability objectives, explanation targets, and limitations. In addition, we discuss current evaluation practices, highlight key challenges in achieving reliable and deployable XAI-HAR, and outline directions toward trustworthy activity recognition systems that better support human understanding and decision-making.