Uncover and Unlearn Nuisances: Agnostic Fully Test-Time Adaptation

📅 2025-11-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses fully test-time adaptation (FTTA), where neither source data nor the pre-trained model's training protocol is available. The authors propose an "Uncover-and-Unlearn" framework that explicitly models and removes domain-induced interference in both the feature and label spaces, enabling adaptive generalization to unknown target-domain shifts. The method simulates inter-domain interference via predefined mappings and imposes mutual-information regularization on latent representations: it suppresses spurious feature variations in the feature space while encouraging confident, consistent predictions in the label space. Extensive experiments across diverse style- and noise-induced distribution shifts demonstrate substantial improvements over existing FTTA methods, validating robust cross-domain generalization. The core innovation lies in decoupling interference modeling from interference removal, achieved without accessing source data or assuming knowledge of the pretraining protocol.

📝 Abstract
Fully Test-Time Adaptation (FTTA) addresses domain shifts without access to the source data or training protocols of the pre-trained models. Traditional strategies that align source and target feature distributions are infeasible in FTTA due to the absence of training data and unpredictable target domains. In this work, we exploit a dual perspective on FTTA, and propose Agnostic FTTA (AFTTA) as a novel formulation that enables the use of off-the-shelf domain transformations during test time for direct generalization to unforeseeable target data. To address this, we develop an uncover-and-unlearn approach. First, we uncover potential unwanted shifts between source and target domains by simulating them through predefined mappings, and treat them as nuisances. Then, during test-time prediction, the model is enforced to unlearn these nuisances by regularizing the consequent shifts in latent representations and label predictions. Specifically, a mutual information-based criterion is devised and applied to guide nuisance unlearning in the feature space and encourage confident and consistent prediction in the label space. Our proposed approach explicitly addresses agnostic domain shifts, enabling superior model generalization under FTTA constraints. Extensive experiments on various tasks, involving corruption and style shifts, demonstrate that our method consistently outperforms existing approaches.
Problem

Research questions and friction points this paper is trying to address.

Addresses domain shifts without source data during test-time adaptation
Uncovers and unlearns nuisances through simulated domain transformations
Enhances generalization to unforeseeable target domains using mutual information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agnostic FTTA formulation using off-the-shelf domain transformations
Uncover-and-unlearn approach simulating shifts as nuisances
Mutual information regularization for feature and label consistency
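The page does not give the paper's exact objective, but the three regularizers the summary describes (a feature-space nuisance penalty on simulated shifts, label-space consistency across transformations, and a mutual-information-style confidence criterion) can be sketched roughly as follows. This is a hedged illustration only: the function names, the squared-error feature penalty, the symmetric-KL consistency term, and the information-maximization proxy are all assumptions standing in for the authors' actual formulation.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy(p, eps=1e-8):
    return -(p * np.log(p + eps)).sum(axis=1)

def kl(p, q, eps=1e-8):
    return (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=1).mean()

def unlearn_losses(feat_x, feat_tx, logits_x, logits_tx):
    """Illustrative penalties for a batch of clean inputs x and
    transformed inputs T(x), where T is a predefined nuisance mapping."""
    # Feature space: penalize the representation shift the nuisance induces.
    feat_shift = float(np.mean((feat_x - feat_tx) ** 2))
    p, q = softmax(logits_x), softmax(logits_tx)
    # Label space: predictions should agree across the transformation.
    consistency = float(kl(p, q) + kl(q, p))
    # Mutual-information-style proxy: confident per-sample predictions
    # (low conditional entropy) yet diverse over the batch (high marginal
    # entropy); minimizing this encourages confident, non-collapsed output.
    info_max = float(entropy(p).mean() - entropy(p.mean(axis=0, keepdims=True))[0])
    return feat_shift, consistency, info_max
```

At test time, a weighted sum of these terms would be minimized with respect to the adaptable model parameters; the weighting and the choice of predefined mappings are where the actual method's design decisions lie.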
Ponhvoan Srey
College of Computing and Data Science, Nanyang Technological University (NTU), 639798, Singapore, Singapore
Yaxin Shi
Centre for Frontier AI Research, Agency for Science, Technology and Research (A*STAR), 138632, Singapore, Singapore
Hangwei Qian
CFAR, A*STAR, Singapore | Lund University | NTU | USTC
Artificial Intelligence · Trustworthy AI · Transfer Learning · Time Series · AI for Science
Jing Li
Centre for Frontier AI Research, Agency for Science, Technology and Research (A*STAR), 138632, Singapore, Singapore
Ivor W. Tsang
Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), 138632, Singapore, Singapore