AI Summary
To address the prohibitively high computational cost of full-parameter fine-tuning large models for Human Activity Recognition (HAR) in resource-constrained settings, this paper proposes a lightweight adaptation framework based on Masked Autoencoders, pioneering the systematic integration of LoRA and QLoRA into HAR. Methodologically, it synergistically combines low-rank adaptation, weight quantization, and a self-supervised backbone, validated via Leave-One-Dataset-Out cross-dataset evaluation. Key contributions include: (1) revealing the tunable trade-off between accuracy and efficiency governed by adaptation rank; (2) achieving performance on par with full fine-tuning across five public HAR benchmarks; and (3) reducing trainable parameters by 93%, GPU memory consumption by 68%, and training time by 57%, while maintaining robustness under low-supervision regimes.
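The memory saving that QLoRA adds on top of LoRA comes from storing the frozen backbone weights in a quantized format and dequantizing them only for the forward pass. The sketch below is illustrative, not the paper's implementation: it uses simple symmetric int8 absmax quantization as a stand-in for QLoRA's 4-bit scheme, and the matrix size is a hypothetical example.

```python
import numpy as np

# Illustrative sketch of the memory saving QLoRA exploits: frozen weights are
# stored quantized (symmetric int8 absmax here, standing in for the 4-bit
# scheme) and dequantized on the fly when needed.

rng = np.random.default_rng(1)
W = rng.standard_normal((256, 256)).astype(np.float32)  # frozen pretrained weight

def quantize_absmax(w):
    """Map floats to int8 by scaling with the per-tensor absolute maximum."""
    scale = np.abs(w).max() / 127.0
    return np.round(w / scale).astype(np.int8), scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the int8 codes."""
    return q.astype(np.float32) * scale

q, scale = quantize_absmax(W)
W_hat = dequantize(q, scale)

bytes_fp32 = W.nbytes          # 4 bytes per weight
bytes_int8 = q.nbytes          # 1 byte per weight (plus one float scale)
print(f"storage ratio: {bytes_int8 / bytes_fp32:.2f}")  # 0.25
```

With a 4-bit format, as QLoRA actually uses, the frozen-weight footprint shrinks by a further factor of two relative to this int8 sketch.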
Abstract
Human Activity Recognition (HAR) is a foundational task in pervasive computing. While recent advances in self-supervised learning and transformer-based architectures have significantly improved HAR performance, adapting large pretrained models to new domains remains a practical challenge due to limited computational resources on target devices. This paper investigates parameter-efficient fine-tuning techniques, specifically Low-Rank Adaptation (LoRA) and Quantized LoRA (QLoRA), as scalable alternatives to full model fine-tuning for HAR. We propose an adaptation framework built upon a Masked Autoencoder backbone and evaluate its performance under a Leave-One-Dataset-Out validation protocol across five open HAR datasets. Our experiments demonstrate that both LoRA and QLoRA can match the recognition performance of full fine-tuning while significantly reducing the number of trainable parameters, memory usage, and training time. Further analyses reveal that LoRA maintains robust performance even under limited supervision and that the adapter rank provides a controllable trade-off between accuracy and efficiency. QLoRA extends these benefits by reducing the memory footprint of frozen weights through quantization, with minimal impact on classification quality.
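The low-rank adaptation the abstract describes can be sketched as follows. This is a minimal illustration of the standard LoRA formulation, not the paper's code: a frozen weight W is augmented by a trainable update B @ A of rank r, so only A and B are trained. The layer dimensions and rank below are hypothetical; the rank r is the knob that controls the accuracy/efficiency trade-off discussed above.

```python
import numpy as np

# Minimal LoRA-style adapter sketch (illustrative, not the paper's code).
# A frozen weight W of shape (d_out, d_in) gets a trainable low-rank update
# B @ A with rank r << min(d_out, d_in); only A and B are trained.

rng = np.random.default_rng(0)
d_out, d_in, r = 256, 256, 8          # hypothetical layer size and adapter rank

W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-init
                                           # so the adapter starts as a no-op

def lora_forward(x, scale=1.0):
    """Forward pass: frozen path plus scaled low-rank correction."""
    return x @ W.T + scale * (x @ A.T @ B.T)

# Only A and B are updated, so the trainable fraction shrinks with r.
full_params = W.size
lora_params = A.size + B.size
print("trainable fraction:", lora_params / full_params)
```

Because B is zero-initialized, the adapted layer is exactly the pretrained layer at the start of fine-tuning, and the trainable fraction scales linearly with the chosen rank r.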