🤖 AI Summary
This work addresses class ambiguity and prediction bias in semi-supervised domain adaptation, where labeled samples in the target domain are scarce. To mitigate these issues, the authors propose a multi-view consistency learning framework that uses strong data augmentation to construct two training views: one view applies a class-debiasing strategy to refine the prediction distribution, while the other generates pseudo-negative labels ("this sample is not class k") to improve inter-class separability. In addition, cross-domain affinity learning aligns features of the same class across domains. By integrating debiased prediction, pseudo-negative labeling, and cross-domain alignment, the method improves generalization and class discriminability under limited annotation. Experiments on DomainNet and Office-Home show consistent gains over existing approaches, reducing labeling cost while improving cross-domain adaptation.