🤖 AI Summary
This work identifies and systematically analyzes the pervasive "over-confidence" problem in unsupervised domain adaptation (UDA): models produce high-confidence predictions on the target domain yet suffer from severe miscalibration, degrading the reliability of those predictions. Existing test-time adaptation methods rely on entropy minimization and inadvertently exacerbate this calibration bias. To address this limitation, we propose a novel optimization paradigm that jointly pursues accuracy and calibration, integrating entropy regularization for confidence correction, learnable temperature scaling, and domain-aware confidence constraints. Evaluated on standard benchmarks including Office-31 and VisDA, our method maintains state-of-the-art (SOTA) accuracy while reducing the Expected Calibration Error (ECE) by over 40%, significantly enhancing predictive trustworthiness.
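To make the two quantities above concrete, here is a minimal sketch (not the paper's code; all function names are hypothetical) of how Expected Calibration Error is measured and how temperature scaling softens over-confident predictions:

```python
# Illustrative sketch: Expected Calibration Error (ECE) and temperature
# scaling. These are standard definitions, not the paper's implementation.
import numpy as np

def softmax(logits, temperature=1.0):
    # A temperature > 1 flattens the distribution, lowering confidence.
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def expected_calibration_error(probs, labels, n_bins=10):
    # ECE: bin predictions by confidence, then take the weighted average of
    # the gap between each bin's mean confidence and its empirical accuracy.
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(accuracies[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece
```

For an over-confident model (e.g. ~99% confidence but 60% accuracy), raising the temperature shrinks the confidence–accuracy gap and thus the ECE; in the paper's method the temperature is learned rather than hand-set.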
📝 Abstract
When neural networks are confronted with unfamiliar data that deviate from their training set, this signifies a domain shift. Although these networks output predictions on such inputs, they typically fail to account for how familiar they are with these novel observations. Prevailing approaches to unsupervised domain adaptation aim to curtail model entropy, but in doing so they unintentionally produce models with sub-optimal calibration, a dilemma we term the over-certainty phenomenon. In this paper, we uncover this concerning trend in unsupervised domain adaptation and propose a solution that maintains accuracy while also addressing calibration.