🤖 AI Summary
To address the low accuracy and high resource overhead of human activity recognition (HAR) on smartglasses using head-mounted IMUs, this paper proposes a hierarchical semi-supervised learning framework tailored for edge deployment. The framework leverages only high-level activity labels (e.g., drinking, speaking) to jointly learn transferable low-level motion representations (e.g., nodding, shaking the head), enabling, for the first time, direct deployment of a lightweight low-level encoder on commercial IMU chips. It integrates hierarchical neural architectures, IMU-specific temporal modeling, model compression, and edge-aware optimization. Evaluated on nine high-level and three low-level activities, the method achieves F1 scores of 0.826 and 0.855, respectively, with only 63K and 22K parameters, substantially reducing memory footprint, computational cost, and power consumption.
📝 Abstract
Human activity recognition (HAR) on smartglasses has various use cases, including health/fitness tracking and input for context-aware AI assistants. However, current approaches for egocentric activity recognition suffer from low performance or are resource-intensive. In this work, we introduce a resource-efficient (memory, compute, power, sample) machine learning algorithm, EgoCHARM, for recognizing both high-level and low-level activities using a single egocentric (head-mounted) Inertial Measurement Unit (IMU). Our hierarchical algorithm employs a semi-supervised learning strategy, requiring primarily high-level activity labels for training, to learn generalizable low-level motion embeddings that can be effectively utilized for low-level activity recognition. We evaluate our method on 9 high-level and 3 low-level activities, achieving F1 scores of 0.826 and 0.855 on high-level and low-level activity recognition, respectively, with just 63k high-level and 22k low-level model parameters, allowing the low-level encoder to be deployed directly on current IMU chips with onboard compute. Lastly, we present results and insights from a sensitivity analysis and highlight the opportunities and limitations of activity recognition using egocentric IMUs.
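To make the hierarchical idea concrete, here is a minimal sketch of the two-stage structure the abstract describes: a tiny low-level encoder that maps short IMU windows to motion embeddings, and a high-level head that classifies an activity from a sequence of those embeddings. This is an illustration only; the layer types, sizes, and names below are assumptions, not the paper's actual EgoCHARM architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class LowLevelEncoder:
    """Maps one short IMU window (frames x 6 axes) to a motion embedding.

    Small on purpose: the paper's point is that a ~22k-parameter encoder
    can run directly on an IMU chip. The sizes here are illustrative.
    """
    def __init__(self, in_dim=6, hidden=32, emb=16):
        self.w1 = rng.normal(0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0, 0.1, (hidden, emb))
        self.b2 = np.zeros(emb)

    def __call__(self, window):
        h = relu(window @ self.w1 + self.b1)  # per-frame features
        pooled = h.mean(axis=0)               # temporal average pool
        return relu(pooled @ self.w2 + self.b2)

    def num_params(self):
        return sum(p.size for p in (self.w1, self.b1, self.w2, self.b2))

class HighLevelHead:
    """Classifies one of 9 high-level activities from embedding sequences."""
    def __init__(self, emb=16, n_classes=9):
        self.w = rng.normal(0, 0.1, (emb, n_classes))
        self.b = np.zeros(n_classes)

    def __call__(self, embeddings):
        pooled = embeddings.mean(axis=0)      # aggregate over windows
        return pooled @ self.w + self.b       # class logits

encoder = LowLevelEncoder()
head = HighLevelHead()

# Fake 1 s of 50 Hz 6-axis IMU data, split into five 10-frame windows.
imu = rng.normal(size=(50, 6))
windows = imu.reshape(5, 10, 6)
embs = np.stack([encoder(w) for w in windows])
logits = head(embs)
print(embs.shape, logits.shape)  # (5, 16) (9,)
```

In the semi-supervised setup the abstract describes, only the high-level logits would receive activity labels during training; the low-level embeddings are learned as a by-product and later reused for low-level recognition with a small additional classifier.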