When Test-Time Adaptation Meets Self-Supervised Models

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the failure of test-time adaptation (TTA) on self-supervised learning (SSL) models with low source-domain accuracy, proposing the first TTA protocol and collaborative learning framework tailored for source-free pretraining scenarios. Methodologically, it integrates contrastive learning and knowledge distillation to establish an online TTA-driven mechanism for progressive representation optimization, compatible with diverse SSL backbones including DINO, MoCo, and iBOT. Its core innovation lies in eliminating reliance on source-domain supervised fine-tuning, enabling purely test-phase adaptation. Extensive evaluation across standard TTA benchmarks demonstrates that the method significantly enhances the generalization performance of various SSL models. Notably, it achieves competitive accuracy—comparable to supervised TTA approaches—without access to any source-domain labels or fine-tuning.

📝 Abstract
Training on test-time data enables deep learning models to adapt to dynamic environmental changes, enhancing their practical applicability. Online adaptation from source to target domains is promising, but it remains highly reliant on the performance of the source-pretrained model. In this paper, we investigate whether test-time adaptation (TTA) methods can continuously improve models trained via self-supervised learning (SSL) without relying on source pretraining. We introduce a self-supervised TTA protocol after observing that existing TTA approaches struggle when applied directly to self-supervised models with low accuracy on the source domain. Furthermore, we propose a collaborative learning framework that integrates SSL and TTA models, leveraging contrastive learning and knowledge distillation for stepwise representation refinement. We validate our method on diverse self-supervised models, including DINO, MoCo, and iBOT, across TTA benchmarks. Extensive experiments confirm the effectiveness of our approach in SSL, showing that it achieves competitive performance even without source pretraining.
Problem

Research questions and friction points this paper is trying to address.

Adapting self-supervised models during test-time without source pretraining
Improving low-accuracy self-supervised models via test-time adaptation
Integrating SSL and TTA for stepwise representation refinement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised TTA protocol for model adaptation
Collaborative learning integrates SSL and TTA models
Contrastive learning and knowledge distillation refine representations
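The collaborative scheme above can be sketched as a single test-time update step: the adapting student model is trained with an InfoNCE-style contrastive loss on two views of the test batch plus a distillation loss from a slowly updated teacher copy of the SSL model. This is a minimal illustration, not the paper's implementation; all names and hyperparameters (`tau`, `distill_weight`, `ema_momentum`) are assumptions.

```python
# Hedged sketch of one collaborative test-time adaptation step (PyTorch).
# The exact losses and schedules in the paper may differ; this only shows
# how contrastive learning and knowledge distillation can be combined online.
import torch
import torch.nn.functional as F

def tta_step(student, teacher, batch_views, optimizer,
             tau=0.1, distill_weight=1.0, ema_momentum=0.996):
    """One test-time update: contrastive loss between two augmented views,
    plus distillation from a momentum teacher copy of the SSL model."""
    v1, v2 = batch_views                      # two augmentations of the same test batch
    z1 = F.normalize(student(v1), dim=-1)     # student embeddings, unit-normalized
    z2 = F.normalize(student(v2), dim=-1)

    # InfoNCE-style contrastive loss: matching views in the batch are positives.
    logits = z1 @ z2.t() / tau
    targets = torch.arange(z1.size(0), device=z1.device)
    contrastive = F.cross_entropy(logits, targets)

    # Knowledge distillation: student matches the teacher's softened output.
    with torch.no_grad():
        t_out = F.softmax(teacher(v1) / tau, dim=-1)
    distill = F.kl_div(F.log_softmax(student(v1) / tau, dim=-1),
                       t_out, reduction="batchmean")

    loss = contrastive + distill_weight * distill
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Stepwise refinement: teacher tracks the student via an EMA update.
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(ema_momentum).add_(p_s, alpha=1 - ema_momentum)
    return loss.item()
```

Because both losses are label-free, this loop needs no source-domain supervision, which is the property the protocol is built around.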