🤖 AI Summary
To address the performance degradation in continual test-time adaptation (CTTA) caused by error accumulation in pseudo-labels, this paper introduces conformal prediction into the CTTA framework for the first time, proposing a Conformal Uncertainty Indicator (CUI). CUI dynamically compensates for the coverage decay induced by domain shift and jointly models domain discrepancy and sample-wise uncertainty, enabling the selection of high-confidence pseudo-labels and confidence-weighted adaptive model updates. Evaluated on multiple CTTA benchmarks, the method significantly improves robustness: average accuracy rises by 3.2–7.8%, the pseudo-label error rate falls by 31%, and coverage deviation stays within ±0.5%. This work establishes a verifiable, uncertainty-aware adaptation paradigm for CTTA.
📝 Abstract
Continual Test-Time Adaptation (CTTA) aims to adapt models to sequentially changing domains during testing, relying on pseudo-labels for self-adaptation. However, incorrect pseudo-labels can accumulate, degrading performance. To address this, we propose a Conformal Uncertainty Indicator (CUI) for CTTA that leverages Conformal Prediction (CP) to generate prediction sets guaranteed to include the true label with a specified coverage probability. Since domain shifts can push coverage below the expected level, making CP unreliable, we dynamically compensate for this coverage decay by measuring both domain-level and sample-level differences. Reliable pseudo-labels identified by CP are then selectively used to enhance adaptation. Experiments confirm that CUI effectively estimates uncertainty and improves the performance of various existing CTTA methods.
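To make the CP machinery concrete, here is a minimal sketch of split conformal prediction with a simple pseudo-label filter. The function names, the nonconformity score (one minus the softmax probability of the true class), and the singleton-set selection rule are illustrative assumptions, not the paper's exact method; in particular, CUI's dynamic coverage compensation under domain shift is not modeled here.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction: build prediction sets that contain the
    true label with probability >= 1 - alpha, assuming exchangeability
    between calibration and test data (the assumption domain shift breaks)."""
    n = len(cal_labels)
    # Nonconformity score: 1 - predicted probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Conformal quantile with the standard finite-sample correction.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(scores, min(q_level, 1.0), method="higher")
    # A class enters the prediction set if its score is below the threshold.
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]

def filter_pseudo_labels(pred_sets):
    """Keep only samples whose prediction set is a singleton; treat that
    single class as a reliable pseudo-label for adaptation (illustrative
    rule -- a proxy for the selective use of CP outputs described above)."""
    keep = [i for i, s in enumerate(pred_sets) if len(s) == 1]
    labels = [int(pred_sets[i][0]) for i in keep]
    return keep, labels

# Toy example: 4 calibration samples, 2 classes, 2 test samples.
cal_probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.6, 0.4]])
cal_labels = np.array([0, 0, 0, 0])
test_probs = np.array([[0.95, 0.05], [0.5, 0.5]])

sets = conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1)
keep, labels = filter_pseudo_labels(sets)
# sets[0] is the singleton {0}; sets[1] is empty, so sample 1 is filtered out.
```

Under domain shift, the empirical coverage of such sets drifts below 1 - alpha, which is exactly the failure mode the paper's coverage compensation targets.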