🤖 AI Summary
In real-world deployment, distribution shifts in test data degrade the generalization of deep models. Existing test-time adaptation (TTA) methods either rely on specific normalization layers or fail to adequately model complex activation distributions. This paper proposes a retraining-free TTA method based on channel-wise quantile alignment. Our approach employs a quantile recalibration mechanism to fully characterize activation distribution shapes, ensuring compatibility with BatchNorm, GroupNorm, and LayerNorm. Additionally, we introduce a robust tail calibration strategy to mitigate instability in tail distribution estimation under small batch sizes. Extensive experiments on CIFAR-10-C, CIFAR-100-C, and ImageNet-C demonstrate significant improvements over state-of-the-art methods. The proposed method achieves strong robustness to diverse corruptions and exhibits cross-architecture generalization, making it particularly suitable for dynamic, resource-constrained real-world deployment scenarios.
📝 Abstract
Domain adaptation is a key strategy for enhancing the generalizability of deep learning models in real-world scenarios, where test distributions often diverge significantly from the training domain. However, conventional approaches typically rely on prior knowledge of the target domain or require model retraining, limiting their practicality in dynamic or resource-constrained environments. Recent test-time adaptation methods based on batch normalization statistic updates allow for unsupervised adaptation, but they often fail to capture complex activation distributions and are constrained to specific normalization layers. We propose Adaptive Quantile Recalibration (AQR), a test-time adaptation technique that modifies pre-activation distributions by aligning quantiles on a channel-wise basis. AQR captures the full shape of activation distributions and generalizes across architectures employing BatchNorm, GroupNorm, or LayerNorm. To address the challenge of estimating distribution tails under varying batch sizes, AQR incorporates a robust tail calibration strategy that improves stability and precision. Our method leverages source-domain statistics computed at training time, enabling unsupervised adaptation without retraining the model. Experiments on CIFAR-10-C, CIFAR-100-C, and ImageNet-C across multiple architectures demonstrate that AQR achieves robust adaptation across diverse settings, outperforming existing test-time adaptation baselines. These results highlight AQR's potential for deployment in real-world scenarios with dynamic and unpredictable data distributions.
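To make the core idea concrete: channel-wise quantile alignment can be sketched as a piecewise-linear quantile-matching transform that maps each channel's test-batch pre-activations onto quantiles stored from the source domain at training time. This is a minimal NumPy illustration of the general technique, not the paper's exact algorithm; the function name, the quantile grid, and the omission of the robust tail calibration step are all assumptions for the sake of a short example.

```python
import numpy as np

def quantile_recalibrate(acts, src_quantiles, probs):
    """Align test-time pre-activations to source-domain quantiles, per channel.

    acts:          (N, C) pre-activations from the current test batch
    src_quantiles: (C, Q) source-domain quantiles saved at training time
    probs:         (Q,) quantile levels, e.g. np.linspace(0.01, 0.99, Q)

    NOTE: illustrative sketch only -- the paper's AQR additionally applies a
    robust tail calibration step, which is omitted here.
    """
    out = np.empty_like(acts, dtype=float)
    for c in range(acts.shape[1]):
        # Empirical quantiles of the test batch for channel c.
        test_q = np.quantile(acts[:, c], probs)
        # Monotone piecewise-linear map: test quantiles -> source quantiles.
        out[:, c] = np.interp(acts[:, c], test_q, src_quantiles[c])
    return out
```

Because the mapping is built from the full quantile grid rather than just a mean and variance, it can correct skew and heavy tails that a BatchNorm-style statistic update would miss; values beyond the outermost quantiles are clipped by `np.interp`, which is one reason tail estimation under small batches needs special care.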