🤖 AI Summary
This study addresses a key bottleneck in smart-home recognition of Activities of Daily Living (ADLs): its heavy reliance on large-scale labeled sensor data. The authors propose the first zero-shot sensor-based recognition framework leveraging Large Language Models (LLMs). The method converts raw, heterogeneous sensor time series into natural-language descriptions and uses prompt engineering to guide the LLM in direct ADL classification, supporting both zero-shot and few-shot prompting while enabling cross-modal semantic alignment. Experiments on two public ADL datasets show that the approach achieves performance comparable to supervised deep learning models in the zero-shot setting, drastically reducing annotation costs. Notably, this work provides the first empirical validation of LLMs for purely sensor-driven ADL recognition without domain-specific fine-tuning, establishing a new paradigm for low-resource intelligent environment perception.
📝 Abstract
The sensor-based recognition of Activities of Daily Living (ADLs) in smart home environments enables several applications in the areas of energy management, safety, well-being, and healthcare. ADLs recognition is typically based on deep learning methods that require large labeled datasets for training. Recently, several studies showed that Large Language Models (LLMs) effectively capture common-sense knowledge about human activities. However, the effectiveness of LLMs for ADLs recognition in smart home environments still deserves to be investigated. In this work, we propose ADL-LLM, a novel LLM-based ADLs recognition system. ADL-LLM transforms raw sensor data into textual representations that are processed by an LLM to perform zero-shot ADLs recognition. Moreover, when a small labeled dataset is available, ADL-LLM can also be empowered with few-shot prompting. We evaluated ADL-LLM on two public datasets, showing its effectiveness in this domain.
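The core idea, converting raw sensor events into a textual description and prompting an LLM to classify the activity, can be sketched as follows. This is an illustrative mock-up, not the authors' implementation: the event format, label set, and prompt template are assumptions for the sake of the example.

```python
# Illustrative sketch of sensor-to-text zero-shot ADL classification.
# The event tuples, label set, and prompt wording are hypothetical;
# the actual ADL-LLM representations and prompts are defined in the paper.

def events_to_text(events):
    """Render (time, sensor, value) triples as an English description."""
    return " ".join(
        f"At {t}, sensor '{s}' reported '{v}'." for t, s, v in events
    )

def build_zero_shot_prompt(events, labels):
    """Compose a classification prompt an LLM could answer directly."""
    return (
        "You observe the following smart-home sensor events:\n"
        f"{events_to_text(events)}\n"
        "Which activity of daily living is the resident performing? "
        f"Answer with one of: {', '.join(labels)}."
    )

labels = ["cooking", "sleeping", "watching TV"]
events = [
    ("18:02", "kitchen motion", "ON"),
    ("18:03", "stove power", "ON"),
    ("18:10", "fridge door", "OPEN"),
]
prompt = build_zero_shot_prompt(events, labels)
print(prompt)
```

In a few-shot variant, a handful of labeled (description, activity) pairs from the small dataset would simply be prepended to the same prompt before the query events.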