When and Where to Reset Matters for Long-Term Test-Time Adaptation

📅 2026-03-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses model collapse in long-term test-time adaptation, where accumulated errors cause the model to converge to predicting only a few classes for all inputs. To mitigate this, the authors propose an Adaptive and Selective Reset (ASR) mechanism that dynamically determines when and where to reset model parameters, avoiding the catastrophic forgetting induced by periodic full resets. ASR is combined with importance-aware regularization and on-the-fly adaptation adjustment to alleviate the knowledge loss a reset incurs. The method significantly outperforms existing approaches across multiple long-term test-time adaptation benchmarks, performing particularly well under severe domain shifts. Notably, it introduces the first localized reset strategy triggered explicitly by estimated collapse risk, enabling more stable and robust adaptation over extended deployment periods.
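To make the "when and where" idea concrete, here is a minimal illustrative sketch, not the paper's actual algorithm: collapse risk is estimated from the entropy of recently predicted classes (when a model collapses, it predicts only a few classes, so this entropy drops), and only the parameters that drifted furthest from their source values are restored. All class names, thresholds, and the drift criterion below are assumptions for illustration.

```python
# Hypothetical sketch of an adaptive, selective reset for test-time
# adaptation (names/thresholds are illustrative, not from the paper).
from collections import Counter, deque
import math

class AdaptiveSelectiveReset:
    def __init__(self, source_params, num_classes, window=100,
                 entropy_threshold=0.5, reset_fraction=0.3):
        self.source = dict(source_params)   # snapshot of source-model weights
        self.num_classes = num_classes
        self.preds = deque(maxlen=window)   # sliding window of predictions
        self.entropy_threshold = entropy_threshold
        self.reset_fraction = reset_fraction

    def collapse_risk(self):
        """Normalized entropy of the recent class histogram (1.0 = uniform,
        near 0.0 = collapsed onto a few classes)."""
        counts = Counter(self.preds)
        n = len(self.preds)
        h = -sum((c / n) * math.log(c / n) for c in counts.values())
        return h / math.log(self.num_classes)

    def step(self, pred_class, current_params):
        """Record one prediction; selectively reset if collapse looks likely.
        Returns (params, reset_happened)."""
        self.preds.append(pred_class)
        # "When": only act on a full window with low prediction entropy.
        if (len(self.preds) < self.preds.maxlen
                or self.collapse_risk() >= self.entropy_threshold):
            return current_params, False
        # "Where": rank parameters by drift from source; reset top fraction.
        drift = {k: abs(current_params[k] - self.source[k])
                 for k in current_params}
        k_reset = max(1, int(self.reset_fraction * len(drift)))
        for name in sorted(drift, key=drift.get, reverse=True)[:k_reset]:
            current_params[name] = self.source[name]
        self.preds.clear()                  # restart risk estimation
        return current_params, True
```

In this sketch a full reset would wipe all adapted weights; resetting only the most-drifted fraction preserves adaptation in the remaining parameters, which is the intuition behind a selective rather than full reset.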

📝 Abstract
When continual test-time adaptation (TTA) persists over the long term, errors accumulate in the model and further cause it to predict only a few classes for all inputs, a phenomenon known as model collapse. Recent studies have explored reset strategies that completely erase these accumulated errors. However, their periodic resets lead to suboptimal adaptation, as they occur independently of the actual risk of collapse. Moreover, their full resets cause catastrophic loss of knowledge acquired over time, even though such knowledge could be beneficial in the future. To this end, we propose (1) an Adaptive and Selective Reset (ASR) scheme that dynamically determines when and where to reset, (2) an importance-aware regularizer to recover essential knowledge lost due to reset, and (3) an on-the-fly adaptation adjustment scheme to enhance adaptability under challenging domain shifts. Extensive experiments across long-term TTA benchmarks demonstrate the effectiveness of our approach, particularly under challenging conditions. Our code is available at https://github.com/YonseiML/asr.
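The abstract's second contribution, an importance-aware regularizer that recovers essential knowledge lost to a reset, can be sketched in the spirit of EWC-style importance weighting. Everything below (the squared-gradient importance estimate, the class and method names, the quadratic penalty) is an assumption for illustration, not the paper's formulation.

```python
# Hypothetical importance-aware regularizer: a quadratic penalty, weighted
# by an accumulated squared-gradient importance estimate, pulls parameters
# back toward their pre-reset (anchor) values after a reset erases them.

class ImportanceAwareRegularizer:
    def __init__(self, decay=0.99, strength=1.0):
        self.importance = {}   # running per-parameter importance estimate
        self.anchor = {}       # pre-reset parameter values to recover toward
        self.decay = decay
        self.strength = strength

    def observe_gradients(self, grads):
        """Accumulate squared gradients as a simple importance proxy."""
        for name, g in grads.items():
            prev = self.importance.get(name, 0.0)
            self.importance[name] = self.decay * prev + (1 - self.decay) * g * g

    def snapshot(self, params):
        """Call just before a reset to remember the adapted weights."""
        self.anchor = dict(params)

    def penalty(self, params):
        """strength * sum_i importance_i * (param_i - anchor_i)^2,
        added to the adaptation loss after a reset."""
        return self.strength * sum(
            self.importance.get(k, 0.0)
            * (params[k] - self.anchor.get(k, params[k])) ** 2
            for k in params
        )
```

The design choice here is that unimportant parameters (small accumulated gradients) are free to stay at their reset values, while important ones are steered back toward the knowledge accumulated before the reset.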
Problem

Research questions and friction points this paper is trying to address.

test-time adaptation
model collapse
error accumulation
knowledge loss
long-term adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive and Selective Reset
test-time adaptation
model collapse
importance-aware regularization
domain shift