ProCal: Probability Calibration for Neighborhood-Guided Source-Free Domain Adaptation

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two challenges in source-free domain adaptation that arise from over-reliance on neighborhood prediction similarity: forgetting of source knowledge and overfitting to local noise. To mitigate these issues, the authors propose ProCal, a dynamic probability calibration method built on a dual-model co-prediction mechanism: neighborhood probabilities are adaptively calibrated by fusing the source model's initial outputs with the current model's online predictions. The approach jointly optimizes a soft supervision loss and a diversity loss, preserving discriminative source-domain information while suppressing interference from local noise. Extensive experiments on 31 cross-domain tasks across four public datasets demonstrate that the proposed method significantly alleviates knowledge forgetting and overfitting, achieving a balanced integration of source knowledge and target-specific information.

📝 Abstract
Source-Free Domain Adaptation (SFDA) adapts pre-trained models to unlabeled target domains without requiring access to source data. Although state-of-the-art methods leveraging local neighborhood structures show promise for SFDA, they tend to over-rely on prediction similarity among neighbors. This over-reliance accelerates the forgetting of source knowledge and increases susceptibility to local noise overfitting. To address these issues, we introduce ProCal, a probability calibration method that dynamically calibrates neighborhood-based predictions through a dual-model collaborative prediction mechanism. ProCal integrates the source model's initial predictions with the current model's online outputs to effectively calibrate neighbor probabilities. This strategy not only mitigates the interference of local noise but also preserves the discriminative information from the source model, thereby achieving a balance between knowledge retention and domain adaptation. Furthermore, we design a joint optimization objective that combines a soft supervision loss with a diversity loss to guide the target model. Our theoretical analysis shows that ProCal converges to an equilibrium where source knowledge and target information are effectively fused, reducing both knowledge forgetting and overfitting. We validate the effectiveness of our approach through extensive experiments on 31 cross-domain tasks across four public datasets. Our code is available at: https://github.com/zhengyinghit/ProCal.
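The calibration-and-loss pipeline described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' released code: the fixed fusion weight `alpha`, the mean-over-neighbors soft target, and the exact loss forms are assumptions made for concreteness (the actual implementation is in the linked repository).

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def calibrate_neighbors(source_probs, current_probs, alpha=0.5):
    """Dual-model co-prediction: fuse the source model's cached initial
    predictions with the current model's online predictions for each
    sample's neighbors. `alpha` is a hypothetical fusion weight.
    Shapes: (batch, n_neighbors, n_classes)."""
    return alpha * source_probs + (1.0 - alpha) * current_probs

def soft_supervision_loss(model_probs, calibrated_neighbor_probs, eps=1e-8):
    """Soft cross-entropy pulling each sample's prediction toward the
    mean calibrated prediction of its neighbors (one plausible form of
    the paper's soft supervision loss)."""
    target = calibrated_neighbor_probs.mean(axis=1)          # (batch, n_classes)
    return -np.mean(np.sum(target * np.log(model_probs + eps), axis=1))

def diversity_loss(model_probs, eps=1e-8):
    """Negative entropy of the batch-mean prediction; minimizing it
    discourages collapse onto a few classes."""
    mean_p = model_probs.mean(axis=0)                        # (n_classes,)
    return np.sum(mean_p * np.log(mean_p + eps))
```

In this sketch the total objective would be `soft_supervision_loss(...) + diversity_loss(...)`; a per-sample or confidence-dependent fusion weight in place of the fixed `alpha` would also be consistent with the abstract's "dynamic" calibration.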
Problem

Research questions and friction points this paper is trying to address.

Source-Free Domain Adaptation
Probability Calibration
Neighborhood Structure
Knowledge Forgetting
Noise Overfitting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probability Calibration
Source-Free Domain Adaptation
Neighborhood Guidance
Dual-Model Collaboration
Knowledge Retention