Explainable AI Using Inherently Interpretable Components for Wearable-based Health Monitoring

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a novel, inherently interpretable AI method tailored to time-series health data from wearable devices, which exhibit strong temporal dependencies. Addressing the common trade-off between model performance and interpretability in conventional explainable AI approaches, the study introduces Inherently Interpretable Components (IICs) grounded in domain knowledge for wearable health monitoring. These IICs form a medically informed set of concepts that enable intuitive, concept-level explanations within a custom-designed explanation space. Evaluated on real-world tasks such as seizure detection and state assessment, the method achieves both high predictive accuracy and trustworthy, human-understandable explanations, reconciling model performance with interpretability.

📝 Abstract
The use of wearables in medicine and wellness, enabled by AI-based models, offers tremendous potential for real-time monitoring and interpretable event detection. Explainable AI (XAI) is required to assess what models have learned and build trust in model outputs, for patients, healthcare professionals, model developers, and domain experts alike. Explaining AI decisions made on time-series data recorded by wearables is especially challenging due to the data's complex nature and temporal dependencies. Too often, explainability using interpretable features leads to performance loss. We propose a novel XAI method that combines explanation spaces and concept-based explanations to explain AI predictions on time-series data. By using Inherently Interpretable Components (IICs), which encapsulate domain-specific, interpretable concepts within a custom explanation space, we preserve the performance of models trained on time series while achieving the interpretability of concept-based explanations based on extracted features. Furthermore, we define a domain-specific set of IICs for wearable-based health monitoring and demonstrate their usability in real applications, including state assessment and epileptic seizure detection.
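The abstract's core idea — mapping raw wearable time series to domain-specific, interpretable concepts and explaining predictions at the concept level — can be illustrated with a minimal, hypothetical sketch. The concept names (`mean_hr`, `hr_variability`), the synthetic window, and the weights below are illustrative assumptions only, not the paper's actual IICs, explanation space, or model.

```python
# Hypothetical sketch of concept-level explanation for wearable data.
# Concept definitions and weights are illustrative, not the paper's IICs.

def extract_concepts(hr_window):
    """Map a raw heart-rate window (bpm samples) to interpretable concept values."""
    n = len(hr_window)
    mean_hr = sum(hr_window) / n
    # Mean absolute successive difference as a crude heart-rate-variability proxy.
    variability = sum(abs(b - a) for a, b in zip(hr_window, hr_window[1:])) / (n - 1)
    return {"mean_hr": mean_hr, "hr_variability": variability}

def explain_score(concepts, weights, bias=0.0):
    """Linear score over concepts; each per-concept term doubles as its explanation."""
    contributions = {name: weights[name] * value for name, value in concepts.items()}
    return sum(contributions.values()) + bias, contributions

# Synthetic heart-rate window (beats per minute).
window = [72, 75, 74, 90, 110, 108, 95, 80]
concepts = extract_concepts(window)
score, contribs = explain_score(concepts, {"mean_hr": 0.02, "hr_variability": 0.1})
```

Because the score is a sum of per-concept terms, a reader can see exactly how much each interpretable concept contributed to a given prediction — the kind of concept-level transparency the abstract describes, here in deliberately simplified linear form.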
Problem

Research questions and friction points this paper is trying to address.

Explainable AI
Wearable-based Health Monitoring
Time-series Data
Interpretability
Inherently Interpretable Components
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI
Inherently Interpretable Components
Time-series Data
Wearable Health Monitoring
Concept-based Explanations
Maurice Kuschel
Signal and System Theory Group, Paderborn University
Medical Data Science
Explainable AI
Solveig Vieluf
Department of Medicine I, Ludwig Maximilian University, Munich, Germany; Konrad Zuse School of Excellence in Reliable AI, Munich, Germany
Claus Reinsberger
Institute of Sports Medicine, Paderborn University, Paderborn, Germany; Division of Sports Neurology and Neurosciences, Mass General Brigham, Boston, USA
Tobias Loddenkemper
Professor, Harvard Medical School
Epilepsy
Tanuj Hasija
Signal and System Theory Group, Paderborn University, Paderborn, Germany