🤖 AI Summary
This study addresses two shortcomings of sensor-based Activities of Daily Living (ADL) recognition in smart homes: weak interpretability and heavy reliance on labeled data. We propose a large language model (LLM)-driven, zero-shot, interpretable recognition framework. Methodologically, it integrates sensor time-series data with attribution-based XAI outputs (e.g., SHAP or LIME) to quantify feature importance, and employs structured prompt engineering to guide LLMs (e.g., GPT, Llama) in generating natural-language explanations. Our key contribution is the first systematic demonstration of LLMs’ dual role in ADL recognition: (1) zero-shot activity classification without labeled training data, and (2) semantic enrichment of XAI outputs into human-readable explanations. Experiments show substantial improvements in explanation naturalness and user acceptability while maintaining reasonable recognition accuracy, overcoming the rigidity and scalability limitations of rule-based explanations. The approach also exposes an inherent trade-off among accuracy, robustness, and computational overhead.
📝 Abstract
Explainable Artificial Intelligence (XAI) aims to uncover the inner reasoning of machine learning models. In IoT systems, XAI improves the transparency of models processing sensor data from multiple heterogeneous devices, ensuring end-users understand and trust their outputs. Among its many applications, XAI has also been applied to sensor-based Activities of Daily Living (ADLs) recognition in smart homes. Existing approaches highlight which sensor events are most important for each predicted activity, using simple rules to convert these events into natural-language explanations for non-expert users. However, these methods produce rigid explanations that lack the flexibility of natural language, and they do not scale. With the recent rise of Large Language Models (LLMs), it is worth exploring whether they can enhance explanation generation, considering their proven knowledge of human activities. This paper investigates potential approaches to combine XAI and LLMs for sensor-based ADL recognition. We evaluate whether LLMs can be used: a) as explainable zero-shot ADL recognition models, avoiding costly labeled data collection, and b) to automate the generation of explanations for existing data-driven XAI approaches when training data is available and the goal is higher recognition rates. Our critical evaluation provides insights into the benefits and challenges of using LLMs for explainable ADL recognition.
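The structured prompt engineering described above, combining raw sensor events with attribution scores (e.g., from SHAP) so an LLM can both classify the activity zero-shot and explain its reasoning, can be sketched roughly as follows. This is an illustrative assumption: the function name, event format, and prompt wording are hypothetical, not the paper's actual implementation.

```python
def build_adl_prompt(events, attributions):
    """Compose an LLM prompt from sensor events and per-sensor importance scores.

    events: list of (timestamp, sensor, state) tuples
    attributions: dict mapping sensor name -> attribution score (e.g., SHAP value)
    """
    lines = [
        "You are an expert in smart-home activity recognition.",
        "Sensor events (time, sensor, state) with importance scores:",
    ]
    for ts, sensor, state in events:
        score = attributions.get(sensor, 0.0)  # default 0.0 if unattributed
        lines.append(f"- {ts} {sensor}={state} (importance: {score:.2f})")
    lines.append(
        "Task: name the most likely Activity of Daily Living and explain, "
        "in plain language, which sensor events support your answer."
    )
    return "\n".join(lines)

# Example: a plausible breakfast-preparation sequence (made-up data).
events = [
    ("08:02", "kitchen_motion", "ON"),
    ("08:03", "fridge_door", "OPEN"),
    ("08:05", "stove_power", "ON"),
]
attributions = {"stove_power": 0.61, "fridge_door": 0.27, "kitchen_motion": 0.12}
print(build_adl_prompt(events, attributions))
```

The resulting text would then be sent to the chosen LLM; the attribution scores ground the model's natural-language explanation in the same evidence a data-driven XAI method would highlight.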