🤖 AI Summary
Universal Domain Adaptation (UniDA) faces challenges arising from inconsistent label spaces between the source and target domains and the presence of target-private classes. Existing visual-space alignment methods are vulnerable to visual ambiguity induced by content discrepancies. To address this, we propose a training-free label-space alignment paradigm that leverages the zero-shot capability of vision-language models (e.g., CLIP) to construct a universal classifier covering both shared and target-private classes. Our approach combines unknown-class discovery via a generative vision-language model with label filtering and semantic refinement to mitigate label noise and semantic ambiguity. Evaluated on the DomainBed benchmarks, our method improves H-score and H³-score by 7.9% and 6.1%, respectively; incorporating self-training yields an additional 1.6% gain. These results demonstrate substantial improvements in cross-domain robustness and generalization.
📝 Abstract
Universal domain adaptation (UniDA) transfers knowledge from a labeled source domain to an unlabeled target domain, where the label spaces may differ and the target domain may contain private classes. Previous UniDA methods focus primarily on visual-space alignment but often struggle with visual ambiguities caused by content differences, which limits their robustness and generalizability. To overcome this, we introduce a novel approach that leverages the strong *zero-shot capabilities* of recent vision-language foundation models (VLMs) such as CLIP, concentrating solely on label-space alignment to enhance adaptation stability. CLIP can generate task-specific classifiers from label names alone. However, adapting CLIP to UniDA is challenging because the label space is not fully known in advance. In this study, we first utilize generative vision-language models to identify unknown categories in the target domain. Noise and semantic ambiguities in the discovered labels -- such as those similar to source labels (e.g., synonyms, hypernyms, hyponyms) -- complicate label alignment. To address this, we propose a training-free label-space alignment method for UniDA (ours). Our method aligns label spaces instead of visual spaces by filtering and refining noisy labels across domains. We then construct a *universal classifier* that integrates both shared knowledge and target-private class information, thereby improving generalizability under domain shift. The results reveal that the proposed method considerably outperforms existing UniDA techniques across key DomainBed benchmarks, delivering average improvements of +7.9% in H-score and +6.1% in H³-score. Furthermore, incorporating self-training further enhances performance, yielding an additional +1.6% gain in both H- and H³-scores.
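The pipeline the abstract describes (discover candidate target labels, filter out those semantically redundant with source labels, then build a zero-shot classifier over the union) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `embed` function below is a deterministic stand-in for a VLM text/image encoder (in practice one would use CLIP's encoders), and the names `filter_private`, `universal_classifier`, and the similarity threshold are hypothetical.

```python
import hashlib
import numpy as np

def embed(text, dim=64):
    """Stand-in encoder: deterministic unit vector per string.
    (Hypothetical; replace with a real VLM encoder such as CLIP.)"""
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "little")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def filter_private(candidates, source_labels, thresh=0.9):
    """Label filtering: drop discovered labels whose embedding is too close
    to any source label (a proxy for synonyms/hypernyms/hyponyms)."""
    src = np.stack([embed(s) for s in source_labels])
    return [c for c in candidates if float((src @ embed(c)).max()) < thresh]

def universal_classifier(source_labels, private_labels):
    """Zero-shot classifier over shared + target-private classes:
    label-name embeddings act as class prototypes."""
    labels = list(source_labels) + list(private_labels)
    W = np.stack([embed(l) for l in labels])
    def classify(img_feat):
        img_feat = img_feat / np.linalg.norm(img_feat)
        return labels[int(np.argmax(W @ img_feat))]
    return classify

source = ["dog", "cat", "car"]
discovered = ["dog", "zebra"]  # "dog" duplicates a source label; "zebra" is private
private = filter_private(discovered, source)
clf = universal_classifier(source, private)
print(private)              # ['zebra']
print(clf(embed("zebra")))  # zebra
```

Because everything here is cosine similarity over fixed embeddings, the whole procedure is training-free: constructing the universal classifier requires no gradient updates, only encoding label names and comparing image features against them.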