🤖 AI Summary
This work addresses Fully Test-Time Adaptation (FTTA), where a pre-trained model must adapt to target data without access to source data or knowledge of the training protocol. The authors propose Agnostic FTTA (AFTTA) together with an "uncover-and-unlearn" framework that explicitly models and removes domain-induced interference in both the feature and label spaces, enabling generalization to unforeseen target-domain shifts. The method simulates potential inter-domain shifts via predefined mappings, treats them as nuisances, and applies a mutual information-based criterion to latent representations: it suppresses nuisance-induced feature variations in the feature space while encouraging confident, consistent predictions in the label space. Extensive experiments across style- and corruption-induced distribution shifts show consistent improvements over existing FTTA methods. The core contribution lies in decoupling nuisance modeling from nuisance removal, achieved without source data or assumptions about the pre-training protocol.
📝 Abstract
Fully Test-Time Adaptation (FTTA) addresses domain shifts without access to the source data or training protocol of a pre-trained model. Traditional strategies that align source and target feature distributions are infeasible in FTTA due to the absence of training data and the unpredictability of target domains. In this work, we take a dual perspective on FTTA and propose Agnostic FTTA (AFTTA), a novel formulation that permits the use of off-the-shelf domain transformations at test time, enabling direct generalization to unforeseen target data. To address this setting, we develop an uncover-and-unlearn approach. First, we uncover potential unwanted shifts between the source and target domains by simulating them through predefined mappings, and treat these shifts as nuisances. Then, during test-time prediction, the model is forced to unlearn these nuisances by regularizing the resulting shifts in latent representations and label predictions. Specifically, a mutual information-based criterion is devised to guide nuisance unlearning in the feature space and to encourage confident, consistent predictions in the label space. Our approach explicitly addresses agnostic domain shifts, enabling superior model generalization under FTTA constraints. Extensive experiments on various tasks, involving corruption and style shifts, demonstrate that our method consistently outperforms existing approaches.
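To make the uncover-and-unlearn idea concrete, below is a minimal, hypothetical NumPy sketch of such a test-time objective. All names (`features`, `predict`, `aftta_loss`, the toy linear model, and the noise transform) are illustrative assumptions, not the paper's implementation: the feature-space term here uses a simple squared distance between clean and transformed features as a stand-in for the paper's mutual information-based nuisance criterion, and the label-space terms combine prediction entropy (confidence) with a KL consistency penalty between the two views.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy "pre-trained" model: a linear feature extractor and a linear classifier.
W_feat = rng.normal(size=(8, 4))
W_cls = rng.normal(size=(4, 3))

def features(x):
    return np.tanh(x @ W_feat)

def predict(x):
    return softmax(features(x) @ W_cls)

def aftta_loss(x, transform):
    """Sketch of a test-time objective: penalize feature shifts under a
    predefined domain transformation (nuisance unlearning in feature space)
    and encourage confident, consistent predictions (label space)."""
    x_t = transform(x)                      # simulate a potential domain shift
    f, f_t = features(x), features(x_t)
    p, p_t = predict(x), predict(x_t)
    # Feature-space term: proxy for suppressing nuisance-induced variation.
    feat_term = np.mean((f - f_t) ** 2)
    # Label-space terms: low entropy (confidence) + cross-view consistency (KL).
    entropy = -np.mean(np.sum(p * np.log(p + 1e-12), axis=-1))
    consistency = np.mean(
        np.sum(p * (np.log(p + 1e-12) - np.log(p_t + 1e-12)), axis=-1)
    )
    return feat_term + entropy + consistency

# A batch of unlabeled test inputs and an off-the-shelf corruption mapping.
x = rng.normal(size=(16, 8))
noise_shift = lambda z: z + 0.1 * rng.normal(size=z.shape)
loss = aftta_loss(x, noise_shift)
```

In a full adaptation loop this scalar would be minimized with respect to a subset of model parameters at test time; here it only illustrates how the feature- and label-space regularizers fit together. Note that with the identity transform both the feature term and the consistency term vanish, leaving only the entropy term.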