SelfReplay: Adapting Self-Supervised Sensory Models via Adaptive Meta-Task Replay

πŸ“… 2024-03-29
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the significant performance degradation that self-supervised models suffer under domain shift when deployed to heterogeneous users, this paper proposes SelfReplay, a few-shot domain adaptation framework for personalizing such models. Methodologically, it introduces (1) self-supervised meta-learning during pre-training, so that the learned representations transfer across users; (2) a user-side adaptation step that replays the self-supervised task on user-specific data, enabling personalized adaptation from only a few samples; and (3) lightweight on-device fine-tuning suited to mobile hardware. Evaluated on four benchmark datasets, SelfReplay improves F1 score by 8.8 percentage points on average over existing baselines. On a commodity smartphone, it completes adaptation within three minutes with only 9.54% memory consumption. The framework bridges the gap between generic self-supervised representation learning and practical, resource-constrained edge deployment across diverse user populations.
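The user-side "task replay" idea above can be sketched in a few lines. The paper does not specify its self-supervised objective, model architecture, or optimizer in this summary, so the sketch below assumes a masked-reconstruction task and a single linear encoder with hand-derived gradients; all names (`ssl_loss_and_grad`, `replay_adapt`) are hypothetical, not the authors' API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained linear encoder; in SelfReplay these weights
# would come from server-side meta-pretraining.
W = rng.normal(scale=0.1, size=(16, 16))

def ssl_loss_and_grad(W, x_batch, mask_rate=0.3):
    """Masked-reconstruction self-supervised loss (an assumed stand-in
    for the paper's task): zero out a random subset of sensor channels
    and reconstruct the full input via W. Returns loss and dLoss/dW."""
    mask = rng.random(x_batch.shape) < mask_rate
    x_in = np.where(mask, 0.0, x_batch)   # masked input
    err = x_in @ W - x_batch              # reconstruction error
    loss = np.mean(err ** 2)
    grad = 2.0 * x_in.T @ err / err.size  # least-squares gradient
    return loss, grad

def replay_adapt(W, user_data, steps=100, lr=1.0):
    """Replay the self-supervised task on the user's own unlabeled
    windows, nudging the pretrained encoder toward the user's domain."""
    W = W.copy()
    for _ in range(steps):
        _, g = ssl_loss_and_grad(W, user_data)
        W -= lr * g
    return W

# A few unlabeled sensor windows from the target user (synthetic here).
user_x = rng.normal(size=(32, 16))
loss_before, _ = ssl_loss_and_grad(W, user_x)
W_adapted = replay_adapt(W, user_x)
loss_after, _ = ssl_loss_and_grad(W_adapted, user_x)
print(f"SSL loss on user data: {loss_before:.3f} -> {loss_after:.3f}")
```

After this unlabeled replay step, the adapted encoder would be fine-tuned with the user's few labeled samples; that supervised step is omitted here.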

πŸ“ Abstract
Self-supervised learning has emerged as a method for utilizing massive unlabeled data to pre-train models, providing an effective feature extractor for various mobile sensing applications. However, when deployed to end-users, these models encounter significant domain shifts attributed to user diversity. We investigate the performance degradation that occurs when self-supervised models are fine-tuned in heterogeneous domains. To address the issue, we propose SelfReplay, a few-shot domain adaptation framework for personalizing self-supervised models. SelfReplay employs self-supervised meta-learning for initial model pre-training, followed by user-side model adaptation that replays the self-supervision with user-specific data. This allows models to adjust their pre-trained representations to the user with only a few samples. Evaluation on four benchmarks demonstrates that SelfReplay outperforms existing baselines by an average F1-score of 8.8%p. Our on-device computational overhead analysis on a commodity off-the-shelf (COTS) smartphone shows that SelfReplay completes adaptation within an unobtrusive latency (under three minutes) with only 9.54% memory consumption, demonstrating the computational efficiency of the proposed method.
Problem

Research questions and friction points this paper is trying to address.

Addresses domain shifts in self-supervised models due to user diversity.
Proposes SelfReplay for few-shot domain adaptation in mobile sensing.
Enhances model personalization with low computational overhead.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised meta-learning for model pre-training
User-specific data replay for model adaptation
Few-shot domain adaptation with low computational overhead