DiCoTTA: Domain-invariant Learning for Continual Test-time Adaptation

📅 2025-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a core challenge in continual test-time adaptation (CTTA): models must adapt online to the current test domain while still generalizing to future unseen domains. The authors propose DiCoTTA, the first online domain-invariant learning framework for CTTA. The method learns feature representations that are invariant across the current and previously seen test domains without corrupting semantic content, combining a dedicated model architecture, a domain-invariant test-time adaptation strategy, a data structure for retaining information from past test domains, and an accompanying online optimization algorithm. Evaluated on four standard CTTA benchmarks, the approach achieves state-of-the-art performance and markedly better generalization to unseen test domains, overcoming the limitation of conventional CTTA methods that adapt exclusively to the current domain.
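The on-the-fly invariance idea in the summary can be sketched in a few lines. The paper's actual losses and update rules are not given here, so everything below is an illustrative assumption: an entropy term stands in for the adaptation objective, a quadratic penalty stands in for the domain-invariance term, and a momentum update stands in for maintaining statistics of past test domains.

```python
import numpy as np

# Illustrative sketch only: DiCoTTA's exact architecture and objectives are
# not specified in this summary. The entropy objective, quadratic alignment
# penalty, and momentum memory update below are common stand-ins, not the
# paper's method.

def entropy(probs):
    """Mean prediction entropy, a standard test-time adaptation objective."""
    return float(-np.mean(np.sum(probs * np.log(probs + 1e-12), axis=1)))

def invariance_penalty(features, memory_mean):
    """Penalize drift of the current batch's mean features away from the
    running mean stored for previously seen test domains."""
    return float(np.sum((features.mean(axis=0) - memory_mean) ** 2))

def ctta_step(features, probs, memory_mean, momentum=0.9, lam=0.1):
    """One online adaptation step: combine the adaptation objective with a
    domain-invariance term, then refresh the domain memory."""
    loss = entropy(probs) + lam * invariance_penalty(features, memory_mean)
    new_memory = momentum * memory_mean + (1 - momentum) * features.mean(axis=0)
    return loss, new_memory
```

A model would minimize `loss` on each incoming test batch while `new_memory` accumulates cross-domain statistics, so later batches are pulled toward features that worked across all domains seen so far.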

📝 Abstract
This paper studies continual test-time adaptation (CTTA), the task of adapting a model to constantly changing unseen domains in testing while preserving previously learned knowledge. Existing CTTA methods mostly focus on adaptation to the current test domain only, overlooking generalization to arbitrary test domains a model may face in the future. To tackle this limitation, we present a novel online domain-invariant learning framework for CTTA, dubbed DiCoTTA. DiCoTTA aims to learn feature representations that are invariant to both current and previous test domains on the fly during testing. To this end, we propose a new model architecture and a test-time adaptation strategy dedicated to learning domain-invariant features without corrupting semantic contents, along with a new data structure and optimization algorithm for effectively managing information from previous test domains. DiCoTTA achieved state-of-the-art performance on four public CTTA benchmarks. Moreover, it showed superior generalization to unseen test domains.
Problem

Research questions and friction points this paper is trying to address.

Adapting models to constantly changing unseen test domains
Preserving learned knowledge while adapting to new domains
Ensuring generalization to arbitrary future test domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online domain-invariant learning framework for CTTA
Model architecture dedicated to domain-invariant features
Data structure and optimization algorithm for managing information from previous test domains
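One way to picture the "managing previous domains" contribution is a bounded store of per-domain feature statistics. The paper's actual data structure is not detailed in this card, so the `DomainMemory` class, its `max_domains` bound, and the mean-of-means invariance target below are hypothetical illustrations.

```python
from collections import deque

import numpy as np

# Hypothetical sketch: DiCoTTA's real memory module is not described here.
# This bounded store keeps one mean feature vector per encountered test
# domain, evicting the oldest domain when capacity is exceeded.

class DomainMemory:
    def __init__(self, max_domains=8):
        self.means = deque(maxlen=max_domains)  # oldest domain dropped first

    def add_domain(self, features):
        """Record the mean feature vector of a newly encountered test domain."""
        self.means.append(np.asarray(features).mean(axis=0))

    def invariance_target(self):
        """Average over all stored domains; aligning current features to this
        target encourages invariance to previously seen test domains."""
        return np.mean(np.stack(list(self.means)), axis=0)
```

Keeping one compact statistic per domain (rather than raw samples) keeps memory constant in stream length, which matters for an online setting where test domains arrive indefinitely.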