🤖 AI Summary
This work addresses the failure of test-time adaptation (TTA) on self-supervised learning (SSL) models with low source-domain accuracy, proposing the first TTA protocol and collaborative learning framework tailored to self-supervised models that lack supervised source pretraining. Methodologically, it integrates contrastive learning and knowledge distillation to establish an online TTA-driven mechanism for progressive representation refinement, compatible with diverse SSL backbones including DINO, MoCo, and iBOT. Its core innovation lies in eliminating reliance on source-domain supervised fine-tuning, enabling purely test-phase adaptation. Extensive evaluation across standard TTA benchmarks demonstrates that the method significantly enhances the generalization performance of various SSL models. Notably, it achieves competitive accuracy—comparable to supervised TTA approaches—without access to any source-domain labels or fine-tuning.
📝 Abstract
Training on test-time data enables deep learning models to adapt to dynamic environmental changes, enhancing their practical applicability. Online adaptation from source to target domains is promising, but it remains highly reliant on the performance of the source-pretrained model. In this paper, we investigate whether test-time adaptation (TTA) methods can continuously improve models trained via self-supervised learning (SSL) without relying on source pretraining. We introduce a self-supervised TTA protocol after observing that existing TTA approaches struggle when directly applied to self-supervised models with low accuracy on the source domain. Furthermore, we propose a collaborative learning framework that integrates SSL and TTA models, leveraging contrastive learning and knowledge distillation for stepwise representation refinement. We validate our method on diverse self-supervised models, including DINO, MoCo, and iBOT, across standard TTA benchmarks. Extensive experiments confirm the effectiveness of our approach in the SSL setting, showing that it achieves competitive performance even without source pretraining.
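The abstract does not give the framework's equations, but the combination it describes—a contrastive term aligning the two collaborating models' representations plus a knowledge-distillation term on their predictions—can be illustrated with a minimal NumPy sketch. All function names, the temperature values, and the weighting `lam` below are hypothetical assumptions for illustration, not the paper's actual formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def info_nce_loss(z_student, z_teacher, temperature=0.1):
    """Contrastive (InfoNCE) loss: the student's embedding of each test
    sample should match the teacher's embedding of the same sample
    (positives on the diagonal), and differ from the other samples."""
    z_s = z_student / np.linalg.norm(z_student, axis=1, keepdims=True)
    z_t = z_teacher / np.linalg.norm(z_teacher, axis=1, keepdims=True)
    logits = z_s @ z_t.T / temperature              # (B, B) cosine similarities
    log_probs = np.log(softmax(logits, axis=1))
    return -np.mean(np.diag(log_probs))             # cross-entropy, diagonal targets

def distillation_loss(student_logits, teacher_logits, tau=2.0):
    """Knowledge distillation: cross-entropy between temperature-softened
    teacher and student predictions (Hinton-style tau**2 scaling)."""
    p_t = softmax(teacher_logits / tau, axis=1)
    log_p_s = np.log(softmax(student_logits / tau, axis=1))
    return -np.mean(np.sum(p_t * log_p_s, axis=1)) * tau**2

def collaborative_tta_loss(z_s, z_t, logits_s, logits_t, lam=0.5):
    """One hypothetical way to combine the two objectives per test batch."""
    return info_nce_loss(z_s, z_t) + lam * distillation_loss(logits_s, logits_t)
```

In an online TTA loop, a loss of this shape would be minimized on each incoming unlabeled test batch to progressively refine the student's representations toward the collaborating model.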