🤖 AI Summary
Cross-dataset human activity recognition (HAR) faces significant challenges due to distribution shifts arising from device heterogeneity, sensor placement variability, and the emergence of unseen activities. To address these issues, we propose LanHAR—the first language-centric HAR framework. LanHAR leverages large language models (LLMs) to generate semantic interpretations of IMU time-series data and activity labels; it employs iterative re-generation and a two-stage semantic alignment training strategy to achieve robust cross-device representation learning and zero-shot recognition of novel activities. Additionally, we design a lightweight sensor encoder (<1.2M parameters) enabling efficient on-device deployment. Evaluated on five public benchmarks, LanHAR achieves an average 9.2% improvement in cross-dataset accuracy and a 14.7% gain in F1-score for unseen activities—substantially outperforming state-of-the-art methods.
📝 Abstract
Human Activity Recognition (HAR) using Inertial Measurement Unit (IMU) sensors is critical for applications in healthcare, safety, and industrial production. However, variations in activity patterns, device types, and sensor placements create distribution gaps across datasets, reducing the performance of HAR models. To address this, we propose LanHAR, a novel system that leverages Large Language Models (LLMs) to generate semantic interpretations of sensor readings and activity labels for cross-dataset HAR. This approach not only mitigates cross-dataset heterogeneity but also enhances the recognition of new activities. LanHAR employs an iterative re-generation method to produce high-quality semantic interpretations with LLMs, together with a two-stage training framework that aligns the semantic interpretations of sensor readings with those of activity labels. This yields a lightweight sensor encoder suitable for mobile deployment, which maps any sensor reading into the semantic interpretation space. Experiments on five public datasets demonstrate that our approach significantly outperforms state-of-the-art methods in both cross-dataset HAR and new activity recognition. The source code will be made publicly available.
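The core idea of aligning sensor embeddings with label-description embeddings in a shared semantic space can be sketched with a toy example. This is a minimal illustration, not LanHAR's actual pipeline: the embeddings below are hand-made 3-D stand-ins for what an LLM text encoder and the lightweight sensor encoder would produce, and `zero_shot_classify` is a hypothetical helper that picks the activity whose label embedding is most similar to the sensor embedding.

```python
import numpy as np

def cosine_sim(a, b):
    # Standard cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_classify(sensor_emb, label_embs, label_names):
    # Zero-shot recognition: choose the activity label whose semantic
    # embedding lies closest to the sensor reading's embedding.
    # New activities only require adding a new label embedding.
    sims = [cosine_sim(sensor_emb, e) for e in label_embs]
    return label_names[int(np.argmax(sims))]

# Toy 3-D "semantic" embeddings (stand-ins for LLM-derived text embeddings).
label_names = ["walking", "sitting", "running"]
label_embs = [np.array([1.0, 0.1, 0.0]),
              np.array([0.0, 1.0, 0.1]),
              np.array([0.9, 0.0, 0.8])]

# A sensor encoder would map an IMU window into the same space; here we
# just assume it produced this vector.
sensor_emb = np.array([0.95, 0.05, 0.7])

print(zero_shot_classify(sensor_emb, label_embs, label_names))  # → running
```

Because classification reduces to nearest-neighbor search in the shared space, unseen activities can be handled by embedding their textual descriptions at inference time, without retraining the sensor encoder.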