Training-Free Label Space Alignment for Universal Domain Adaptation

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Universal Domain Adaptation (UniDA) faces challenges arising from inconsistent label spaces between source and target domains and the presence of target-private classes. Existing visual-space alignment methods are vulnerable to visual ambiguity induced by content discrepancies. To address this, we propose a training-free label-space alignment paradigm that leverages the zero-shot capability of vision-language models (e.g., CLIP) to construct a universal classifier covering both shared and target-private classes. Our approach integrates generative unknown-class identification, label filtering, and semantic refinement to mitigate label noise and semantic ambiguity. Evaluated on the DomainBed benchmark, our method improves H-score and H³-score by 7.9% and 6.1%, respectively; incorporating self-training yields an additional 1.6% gain. These results demonstrate substantial improvements in cross-domain robustness and generalization.

📝 Abstract
Universal domain adaptation (UniDA) transfers knowledge from a labeled source domain to an unlabeled target domain, where label spaces may differ and the target domain may contain private classes. Previous UniDA methods primarily focused on visual-space alignment but often struggled with visual ambiguities due to content differences, which limited their robustness and generalizability. To overcome this, we introduce a novel approach that leverages the strong zero-shot capabilities of recent vision-language foundation models (VLMs) such as CLIP, concentrating solely on label-space alignment to enhance adaptation stability. CLIP can generate task-specific classifiers based only on label names. However, adapting CLIP to UniDA is challenging because the label space is not fully known in advance. In this study, we first utilize generative vision-language models to identify unknown categories in the target domain. Noise and semantic ambiguities in the discovered labels, such as those similar to source labels (e.g., synonyms, hypernyms, hyponyms), complicate label alignment. To address this, we propose a training-free label-space alignment method for UniDA. Our method aligns label spaces instead of visual spaces by filtering and refining noisy labels between the domains. We then construct a universal classifier that integrates both shared knowledge and target-private class information, thereby improving generalizability under domain shifts. The results reveal that the proposed method considerably outperforms existing UniDA techniques across key DomainBed benchmarks, delivering an average improvement of +7.9% in H-score and +6.1% in H³-score. Furthermore, incorporating self-training achieves an additional +1.6% increment in both H- and H³-scores.
Problem

Research questions and friction points this paper is trying to address.

Aligning label spaces between domains with unknown target categories
Addressing visual ambiguities in universal domain adaptation methods
Filtering noisy labels and semantic ambiguities across domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages CLIP's zero-shot capabilities for label space alignment
Uses generative models to identify unknown target domain categories
Constructs universal classifier integrating shared and private knowledge
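The classifier construction above relies on CLIP's ability to classify using only label names: each label name is embedded by the text encoder, and an image is assigned to the label whose text embedding it is most similar to. A minimal, self-contained sketch of that mechanism follows; the embeddings here are toy placeholders rather than CLIP outputs, and `zero_shot_classify` is an illustrative helper, not a function from the paper.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs):
    """Pick the label whose text embedding is most similar to the image.

    CLIP-style zero-shot classification reduces to cosine similarity
    between a unit-normalized image embedding and one text embedding
    per label name. A real pipeline would obtain these vectors from
    CLIP's image and text encoders; here they are toy placeholders.
    """
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = text_embs @ image_emb  # cosine similarity per label
    return int(np.argmax(sims))

# Toy example: the image vector is closest to the "cat" text vector.
labels = ["dog", "cat", "car"]
image = np.array([0.1, 0.9, 0.0])
texts = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
print(labels[zero_shot_classify(image, texts)])  # prints "cat"
```

Because the classifier is built purely from label names, extending it to newly discovered target-private classes amounts to appending their text embeddings, which is what makes a training-free universal classifier possible.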
Dujin Lee
Department of Artificial Intelligence, Korea University, Republic of Korea
Sojung An
Department of Artificial Intelligence, Korea University, Republic of Korea
Jungmyung Wi
Department of Artificial Intelligence, Korea University, Republic of Korea
Kuniaki Saito
Boston University
Donghyun Kim
Department of Artificial Intelligence, Korea University, Republic of Korea