🤖 AI Summary
This work addresses the degradation of adaptation performance in models that are continually adapted at test time in long-term non-stationary environments. To mitigate this issue, the authors propose an Adaptive-and-Balanced Re-initialization (ABR) mechanism that links label-flip trajectory patterns with the timing of model weight re-initialization. By monitoring label-flip behavior and adaptively adjusting the re-initialization interval, ABR sustains stable adaptation over extended periods. Extensive evaluations on multiple Continual Test-Time Adaptation (CTTA) benchmarks show that ABR outperforms existing methods, enhancing both robustness and accuracy in non-stationary settings.
📝 Abstract
Continual test-time domain adaptation (CTTA) aims to adjust models so that they perform well over time across non-stationary environments. While previous methods have made considerable efforts to optimize the adaptation process, a crucial question remains: can the model adapt to continually changing environments over a long time? In this work, we explore facilitating better CTTA in the long run using a re-initialization (or reset) based method. First, we observe that long-term performance is associated with the trajectory pattern of label flips. Based on this observed correlation, we propose a simple yet effective policy, Adaptive-and-Balanced Re-initialization (ABR), to preserve the model's long-term performance. In particular, ABR performs weight re-initialization at adaptive intervals, where the interval is determined by changes in label flips. The proposed method is validated on extensive CTTA benchmarks, achieving superior performance.
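To make the policy concrete, here is a minimal sketch of the core idea: track how often predicted labels flip between consecutive adaptation steps, and shorten the reset interval when the flip rate changes sharply. All names, thresholds, and the specific interval-scaling rule below are illustrative assumptions, not the paper's actual implementation.

```python
class ABRSketch:
    """Hypothetical sketch of an adaptive re-initialization policy.

    Assumption: a large change in the label-flip rate signals that the
    environment is drifting, so the reset interval is shortened; a
    stable flip rate keeps the interval near its base value.
    """

    def __init__(self, base_interval=100, min_interval=10):
        self.base_interval = base_interval
        self.min_interval = min_interval
        self.prev_labels = None
        self.prev_flip_rate = 0.0
        self.steps_since_reset = 0

    def flip_rate(self, labels):
        # Fraction of samples whose predicted label changed since the
        # previous step (0.0 on the very first step).
        if self.prev_labels is None:
            return 0.0
        flips = sum(a != b for a, b in zip(labels, self.prev_labels))
        return flips / len(labels)

    def should_reset(self, labels):
        # Returns True when the (adaptively shortened) interval elapses,
        # i.e. when the caller should re-initialize the model weights.
        rate = self.flip_rate(labels)
        delta = abs(rate - self.prev_flip_rate)
        # Illustrative rule: sharper changes in flip rate -> shorter interval.
        interval = max(self.min_interval,
                       int(self.base_interval * (1.0 - delta)))
        self.prev_labels = list(labels)
        self.prev_flip_rate = rate
        self.steps_since_reset += 1
        if self.steps_since_reset >= interval:
            self.steps_since_reset = 0
            return True
        return False
```

In use, `should_reset` would be called once per adaptation batch with the batch's predicted labels; when it returns `True`, the model's weights are restored to their source (pre-adaptation) values. The real method may use a different statistic of the flip trajectory and a different interval schedule.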