🤖 AI Summary
This work addresses the challenge of efficiently constructing high-quality training sets from large-scale data servers in unsupervised domain adaptation (UDA) scenarios where the target domain is accessible but unlabeled. We propose a hierarchical data-server-based training set search framework which, unlike conventional model-centric UDA approaches, introduces server structure optimization into UDA for the first time. Central to our method is a bipartite mode matching (BMM) mechanism that finds an optimal one-to-one alignment between the semantic modes of the source and target domains. Experiments on person re-identification and object detection benchmarks demonstrate that the selected training sets significantly reduce domain discrepancy and outperform those produced by existing training set selection methods. Moreover, when integrated with complementary UDA techniques such as pseudo-labeling, our approach yields further performance gains.
📝 Abstract
We explore a situation in which the target domain is accessible, but real-time data annotation is not feasible. Instead, we would like to construct an alternative training set from a large-scale data server so that a competitive model can be obtained. Because the target domain usually exhibits distinct modes (i.e., semantic clusters representing the data distribution), model performance is compromised if the training set does not cover these target modes. While prior works improve algorithms iteratively, our research explores the often-overlooked potential of optimizing the structure of the data server. Inspired by the hierarchical nature of web search engines, we introduce a hierarchical data server, together with a bipartite mode matching (BMM) algorithm to align source and target modes. For each target mode, we search the server data tree for the best mode match, which may be large or small in size. Through bipartite matching, we aim for all target modes to be optimally matched with source modes in a one-to-one fashion. Compared with existing training set search algorithms, we show that the matched server modes constitute training sets with consistently smaller domain gaps to the target domain across object re-identification (re-ID) and detection tasks. Consequently, models trained on our searched training sets achieve higher accuracy than those trained otherwise. BMM enables data-centric unsupervised domain adaptation (UDA) that is orthogonal to existing model-centric UDA methods; combining BMM with existing UDA techniques such as pseudo-labeling yields further improvement.
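To make the one-to-one mode matching idea concrete, here is a minimal sketch. It assumes a precomputed cost matrix of distances between target modes and candidate server modes (e.g., distances between cluster centroids); the matrix values and the brute-force solver are illustrative stand-ins, not the paper's actual implementation, which operates over a hierarchical server data tree.

```python
from itertools import permutations

# Hypothetical pairwise distances between 3 target modes (rows) and
# 4 candidate server modes (columns). Smaller = better match.
cost = [
    [0.9, 0.2, 0.7, 0.5],  # target mode 0
    [0.4, 0.8, 0.1, 0.6],  # target mode 1
    [0.3, 0.5, 0.9, 0.2],  # target mode 2
]

def bipartite_mode_match(cost):
    """Return the one-to-one assignment of target modes to server modes
    that minimizes total distance. Brute force over permutations is fine
    for a handful of modes; at scale one would use the Hungarian
    algorithm (e.g., scipy.optimize.linear_sum_assignment)."""
    n_targets, n_sources = len(cost), len(cost[0])
    best_perm, best_total = None, float("inf")
    for perm in permutations(range(n_sources), n_targets):
        total = sum(cost[t][s] for t, s in enumerate(perm))
        if total < best_total:
            best_perm, best_total = perm, total
    return list(enumerate(best_perm)), best_total

matching, total = bipartite_mode_match(cost)
print(matching)  # [(0, 1), (1, 2), (2, 3)]
```

The key property is the one-to-one constraint: each target mode is paired with a distinct server mode, so no single large server cluster can dominate the matching at the expense of coverage of the target's other modes.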