Subject Invariant Contrastive Learning for Human Activity Recognition

📅 2025-07-03
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Domain shift caused by subject variability severely hampers generalization in human activity recognition (HAR). Contrastive learning approaches often capture subject-specific rather than activity-specific features, which undermines cross-subject transfer. To address this, the paper proposes Subject-Invariant Contrastive Learning (SICL), a loss function that suppresses subject bias by re-weighting negative pairs drawn from the same subject, thereby emphasizing activity-specific representations. SICL is modular: it plugs into multiple self-supervised methods, multimodal sensor inputs (e.g., IMU, RGB, depth), and supervised learning frameworks. Evaluated on three benchmark datasets (UTD-MHAD, MMAct, and DARai), SICL improves unseen-subject recognition accuracy by up to 11% over traditional contrastive learning baselines, with experiments demonstrating robustness across modalities and settings.

📝 Abstract
The high cost of annotating data makes self-supervised approaches, such as contrastive learning methods, appealing for Human Activity Recognition (HAR). Effective contrastive learning relies on selecting informative positive and negative samples. However, HAR sensor signals are subject to significant domain shifts caused by subject variability. These domain shifts hinder model generalization to unseen subjects by embedding subject-specific variations rather than activity-specific features. As a result, human activity recognition models trained with contrastive learning often struggle to generalize to new subjects. We introduce Subject-Invariant Contrastive Learning (SICL), a simple yet effective loss function to improve generalization in human activity recognition. SICL re-weights negative pairs drawn from the same subject to suppress subject-specific cues and emphasize activity-specific information. We evaluate our loss function on three public benchmarks: UTD-MHAD, MMAct, and DARai. We show that SICL improves performance by up to 11% over traditional contrastive learning methods. Additionally, we demonstrate the adaptability of our loss function across various settings, including multiple self-supervised methods, multimodal scenarios, and supervised learning frameworks.
Problem

Research questions and friction points this paper is trying to address.

Address subject variability in HAR contrastive learning
Improve model generalization to unseen subjects
Suppress subject-specific cues in activity recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Subject-Invariant Contrastive Learning loss function
Re-weights same-subject negative pairs
Suppresses subject-specific cues