Autonomous Source Knowledge Selection in Multi-Domain Adaptation

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the redundancy and irrelevant knowledge interference caused by an excessive number of source domains in unsupervised multi-source domain adaptation (UMDA), this paper proposes a density-driven autonomous source knowledge selection framework. The method jointly performs density estimation and pseudo-label enhancement to enable dual-granularity, learnable filtering—operating at both the sample level (to identify highly transferable source samples) and the model level (to select optimal source models)—in a dynamic manner. Innovatively integrating pretrained multimodal representations, multi-source ensemble learning, pseudo-label denoising, and self-supervised optimization, the framework significantly improves generalization on the target domain. Extensive experiments on multiple real-world benchmarks demonstrate substantial gains over state-of-the-art methods; notably, under the challenging 100-source-domain setting, it achieves an average accuracy improvement of over 5.2%.
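The summary describes dual-granularity, density-driven filtering but the page gives no implementation details. As a rough illustration only, here is a minimal NumPy sketch of what such selection *could* look like: source samples are scored by their estimated density under the target feature distribution, and whole source domains (models) are ranked the same way. All function names, the Gaussian-kernel density estimate, and the keep ratios are assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of density-driven source selection; the paper's exact
# method is not specified here, so this is an illustrative approximation.
import numpy as np

def gaussian_kde_scores(source_feats, target_feats, bandwidth=1.0):
    """Score each source sample by its estimated density under the
    target feature distribution (higher = more target-like)."""
    # Pairwise squared distances between source and target features.
    d2 = ((source_feats[:, None, :] - target_feats[None, :, :]) ** 2).sum(-1)
    # Average Gaussian kernel response over all target samples.
    return np.exp(-d2 / (2.0 * bandwidth ** 2)).mean(axis=1)

def select_samples(source_feats, target_feats, keep_ratio=0.5):
    """Sample-level filtering: keep the most transferable source samples."""
    scores = gaussian_kde_scores(source_feats, target_feats)
    k = max(1, int(len(scores) * keep_ratio))
    return np.argsort(scores)[-k:]  # indices of the top-k samples

def select_models(per_domain_feats, target_feats, top_m=2):
    """Model-level filtering: rank source domains by the mean density
    of their samples under the target distribution."""
    domain_scores = [gaussian_kde_scores(f, target_feats).mean()
                     for f in per_domain_feats]
    return np.argsort(domain_scores)[-top_m:]
```

In this toy version, a source domain whose features lie far from the target distribution receives near-zero density and is dropped at both granularities; a learnable or dynamic variant, as the summary suggests, would adapt the bandwidth and thresholds during training.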

📝 Abstract
Unsupervised multi-domain adaptation plays a key role in transfer learning by leveraging rich information acquired from multiple source domains to solve a target task on an unlabeled target domain. However, multiple source domains often contain much redundant or unrelated information that can harm transfer performance, especially in massive-source-domain settings. Effective strategies are therefore urgently needed for identifying and selecting the most transferable knowledge from massive source domains to address the target task. In this paper, we propose a multi-domain adaptation method named Autonomous Source Knowledge Selection (AutoS) to autonomously select source training samples and models, enabling prediction of the target task using more relevant and transferable source information. The proposed method employs a density-driven selection strategy to choose source samples during training and to determine which source models should contribute to target prediction. Simultaneously, a pseudo-label enhancement module built on a pre-trained multimodal model is employed to mitigate target label noise and improve self-supervision. Experiments on real-world datasets indicate the superiority of the proposed method.
Problem

Research questions and friction points this paper is trying to address.

Selects relevant source samples and models for adaptation
Reduces redundant information from multiple source domains
Mitigates target label noise with pseudo-label enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autonomously selects source samples and models
Uses density-driven strategy for selection
Employs pseudo-label enhancement for self-supervision
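The pseudo-label enhancement bullet can be made concrete with a small sketch. The paper's module is not described in detail here; a common approach, assumed for illustration, is to cross-check the source-model ensemble's prediction against a pretrained multimodal model's zero-shot prediction and keep only confident, agreeing pseudo-labels. The fusion weights, agreement rule, and `conf_thresh` parameter below are all assumptions.

```python
# Hypothetical pseudo-label denoising sketch: keep a target sample only when
# the source-ensemble prediction and a pretrained model's zero-shot
# prediction agree and their fused confidence is high enough.
import numpy as np

def enhance_pseudo_labels(ensemble_probs, zeroshot_probs, conf_thresh=0.6):
    """ensemble_probs, zeroshot_probs: (n_samples, n_classes) softmax outputs.
    Returns fused pseudo-labels and a boolean mask of samples to keep."""
    fused = 0.5 * ensemble_probs + 0.5 * zeroshot_probs  # simple average fusion
    labels = fused.argmax(axis=1)
    agree = ensemble_probs.argmax(axis=1) == zeroshot_probs.argmax(axis=1)
    confident = fused.max(axis=1) >= conf_thresh
    return labels, agree & confident
```

Only the samples passing the mask would feed the self-supervised objective, which is one plausible way the module could "mitigate target label noise" as the abstract states.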
Keqiuyin Li
Australian Artificial Intelligence Institute, University of Technology Sydney, Sydney, NSW, Australia
Jie Lu
Australian Artificial Intelligence Institute, University of Technology Sydney, Sydney, NSW, Australia
Hua Zuo
University of Technology Sydney
transfer learning, domain adaptation, machine learning, fuzzy systems
Guangquan Zhang
University of Technology Sydney, Australia
fuzzy sets and systems, machine learning, decision support systems