Simulations of Common Unsupervised Domain Adaptation Algorithms for Image Classification

📅 2025-02-15
🏛️ IEEE Transactions on Instrumentation and Measurement
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses unsupervised domain adaptation (UDA) for image classification, where labeled source-domain data and unlabeled target-domain data are available. We systematically evaluate mainstream UDA methods on standard benchmarks—Office-31 and Office-Home—within a unified experimental framework. Our key contribution is the first comparative analysis of Transformer-based UDA algorithms (e.g., SSRT) under varying data scales and domain shifts, revealing their robustness boundaries and failure modes. Implementing adversarial training, feature alignment, self-training, and safe self-refinement (SSRT) in PyTorch, we enable reproducible large-scale ablation studies. Results show SSRT achieves 91.6% accuracy on Office-31; however, it suffers significant degradation under small-batch settings—dropping to 72.4% on Office-Home—highlighting a critical practical limitation. This empirical finding provides essential guidance for deploying UDA methods in real-world scenarios with constrained computational resources.
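Of the method families the summary lists (adversarial training, feature alignment, self-training, SSRT), self-training is the simplest to sketch: the model's own confident predictions on unlabeled target data are kept as pseudo-labels for further training. The snippet below is a minimal illustration of that selection step only; the function name, threshold value, and NumPy-based interface are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Confidence-based pseudo-label selection for self-training.

    probs: (n_samples, n_classes) predicted class probabilities on
    unlabeled target data. Returns the indices of samples whose top
    predicted probability meets `threshold`, plus their hard labels.
    (Illustrative sketch; thresholding scheme is an assumption.)
    """
    probs = np.asarray(probs)
    conf = probs.max(axis=1)          # top-class confidence per sample
    labels = probs.argmax(axis=1)     # hard pseudo-label per sample
    keep = conf >= threshold          # keep only confident predictions
    return np.flatnonzero(keep), labels[keep]
```

In a full self-training loop, the selected (index, pseudo-label) pairs would be added to the training set and the model retrained; safe self-refinement methods such as SSRT additionally guard against reinforcing wrong pseudo-labels.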

📝 Abstract
Traditional machine learning assumes that the training and test sets are drawn from the same distribution; however, this assumption does not always hold in practical applications. This distribution disparity can cause severe performance drops when a trained model is applied to new datasets. Domain adaptation (DA) is a machine learning technique that addresses this problem by reducing the differences between domains. This article presents simulation-based evaluations of recent DA techniques, mainly unsupervised DA (UDA), where labels are available only in the source domain. Our study compares these techniques on public datasets with diverse characteristics, highlighting their respective strengths and drawbacks. For example, safe self-refinement for transformer-based DA (SSRT) achieved the highest accuracy (91.6%) on the Office-31 dataset in our simulations; however, its accuracy dropped to 72.4% on the Office-Home dataset when limited batch sizes were used. Besides improving the reader's comprehension of recent DA techniques, our study also highlights challenges and upcoming directions for research in this field. The code is available at https://github.com/AIPMLab/Domain_Adaptation.
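The abstract frames DA as reducing the differences between domains. A common way feature-alignment methods quantify such a difference is the maximum mean discrepancy (MMD) between source and target feature distributions; the sketch below computes a squared-MMD estimate with an RBF kernel. This is a generic illustration, not necessarily the alignment criterion used in the paper, and the `gamma` bandwidth choice is an assumption.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=0.5):
    """Biased squared-MMD estimate between samples X and Y.

    Values near zero indicate similar feature distributions; larger
    values indicate a bigger domain gap. (Illustrative sketch of the
    feature-alignment idea, not the paper's implementation.)
    """
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())
```

In alignment-based UDA, a term like `mmd2(source_features, target_features)` is typically added to the classification loss so the feature extractor learns domain-invariant representations.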
Problem

Research questions and friction points this paper is trying to address.

Addressing distribution disparity in datasets
Simulating unsupervised domain adaptation techniques
Comparing DA algorithms' performance across datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulation-based unsupervised domain adaptation
Transformer-based DA with Safe Self-Refinement
Public dataset comparison for DA techniques