🤖 AI Summary
Test-time adaptation (TTA) often impairs the calibration of predictive uncertainty, hindering deployment in high-stakes applications such as autonomous driving and healthcare. Existing calibration methods rely on static distribution assumptions and thus fail under dynamic test-time shifts. To address this, we propose the first TTA calibration framework grounded in *style invariance*, modeling instance-level correctness likelihood via prediction consistency among style-transformed samples generated with forward passes alone. This yields a gradient-free, plug-and-play calibration signal that requires neither backpropagation nor model fine-tuning. Our method is agnostic to both the TTA algorithm and the model architecture. Extensive experiments across four benchmarks, five TTA methods, and three neural network families demonstrate an average 13-percentage-point reduction in Expected Calibration Error (ECE), significantly outperforming conventional calibration approaches.
📝 Abstract
Test-time adaptation (TTA) enables efficient adaptation of deployed models, yet it often leads to poorly calibrated predictive uncertainty, a critical issue in high-stakes domains such as autonomous driving, finance, and healthcare. Existing calibration methods typically assume fixed models or static distributions, so their performance degrades under real-world, dynamic test conditions. To address these challenges, we introduce Style Invariance as a Correctness Likelihood (SICL), a framework that leverages style invariance for robust uncertainty estimation. SICL estimates instance-wise correctness likelihood by measuring prediction consistency across style-altered variants, requiring only the model's forward pass. This makes it a plug-and-play, backpropagation-free calibration module compatible with any TTA method. Comprehensive evaluations across four benchmarks, five TTA methods, and two realistic scenarios with three model architectures demonstrate that SICL reduces calibration error by an average of 13 percentage points compared to conventional calibration approaches.
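The core idea, estimating an instance's correctness likelihood from prediction agreement across style-altered variants, can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the style transform below (a per-channel mean/scale perturbation) and the toy linear classifier are stand-in assumptions, since the abstract does not specify how style-altered variants are generated.

```python
# Sketch: correctness likelihood from prediction consistency across
# style-altered variants, using only forward passes (no gradients).
import numpy as np

def style_perturb(x, rng, strength=0.1):
    """Crude stand-in for style alteration: jitter each channel's
    mean and scale while leaving content structure intact."""
    shift = rng.normal(0.0, strength, size=(x.shape[0], 1))
    scale = 1.0 + rng.normal(0.0, strength, size=(x.shape[0], 1))
    return x * scale + shift

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def correctness_likelihood(model, x, n_variants=8, seed=0):
    """Fraction of style-altered variants whose predicted class agrees
    with the prediction on the original input; used as a per-instance
    calibration signal in [0, 1]."""
    rng = np.random.default_rng(seed)
    base_pred = softmax(model(x)).argmax()
    agree = sum(
        int(softmax(model(style_perturb(x, rng))).argmax() == base_pred)
        for _ in range(n_variants)
    )
    return agree / n_variants

# Toy "model": a fixed linear classifier over flattened features
# (3 channels x 16 features -> 5 classes). Any forward-only model fits.
rng = np.random.default_rng(42)
W = rng.normal(size=(5, 3 * 16))
model = lambda x: W @ x.reshape(-1)

x = rng.normal(size=(3, 16))
score = correctness_likelihood(model, x)
print(score)  # a multiple of 1/8 in [0, 1]
```

Because the signal needs only repeated forward passes, it can wrap any TTA method and any architecture; a low score flags an instance whose confidence should be tempered during calibration.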