🤖 AI Summary
This paper addresses the challenge of sample selection in multi-source active domain adaptation (MS-ADA), where concurrent inter-class diversity and multi-source domain shifts impede reliable query selection. To tackle this, we propose GALA, a global-local joint selection strategy that integrates k-means-based global clustering with intra-cluster uncertainty-driven local selection. GALA is a parameter-free, plug-and-play sample selection framework that requires no additional trainable parameters. While maintaining computational efficiency, GALA significantly improves target-labeling efficiency: on three standard benchmarks, it achieves near fully supervised performance using only 1% labeled target data, substantially outperforming existing MS-ADA methods. Its core contribution lies in decoupling the modeling of cross-domain discrepancies from intra-class distribution heterogeneity, establishing a novel paradigm for multi-source active learning.
📝 Abstract
Domain Adaptation (DA) provides an effective way to tackle target-domain tasks by leveraging knowledge learned from source domains. Recent studies have extended this paradigm to Multi-Source Domain Adaptation (MSDA), which exploits multiple source domains carrying richer and more diverse transferable information. However, a substantial performance gap remains between adaptation-based methods and fully supervised learning. In this paper, we explore a more practical and challenging setting, named Multi-Source Active Domain Adaptation (MS-ADA), to further enhance target-domain performance by selectively acquiring annotations from the target domain. The key difficulty of MS-ADA lies in designing selection criteria that can jointly handle inter-class diversity and multi-source domain variation. To address these challenges, we propose a simple yet effective global-local selection strategy, GALA, which combines a global k-means clustering step over target-domain samples with a cluster-wise local selection criterion, tackling the two issues in a complementary manner. GALA is plug-and-play and can be seamlessly integrated into existing DA frameworks without introducing any additional trainable parameters. Extensive experiments on three standard DA benchmarks demonstrate that GALA consistently outperforms prior active learning and active DA methods, achieving performance comparable to the fully supervised upper bound while using only 1% of the target annotations.
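The global-local selection described above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the helper names (`kmeans`, `gala_select`), the use of predictive entropy as the local uncertainty criterion, and the even per-cluster split of the annotation budget are all assumptions for illustration.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means (assumed global clustering step): returns a cluster label per sample."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # squared Euclidean distance from every sample to every center
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for c in range(k):
            pts = X[labels == c]
            if len(pts) > 0:
                centers[c] = pts.mean(axis=0)
    return labels

def gala_select(features, probs, budget, n_clusters=10, seed=0):
    """Global-local sketch: cluster target features globally, then pick the
    most uncertain (highest-entropy) samples inside each cluster."""
    labels = kmeans(features, n_clusters, seed=seed)
    # predictive entropy as the local uncertainty criterion (assumption)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    per_cluster = max(budget // n_clusters, 1)  # even budget split (assumption)
    selected = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        if len(idx) == 0:
            continue
        top = idx[np.argsort(-entropy[idx])[:per_cluster]]
        selected.extend(top.tolist())
    return selected[:budget]

# Toy usage: 200 target samples with 16-d features and 5-class softmax outputs.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 16))
logits = rng.normal(size=(200, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
chosen = gala_select(feats, probs, budget=20, n_clusters=10)
```

Because the selection is computed purely from fixed features and model outputs, it adds no trainable parameters and can wrap any existing DA training loop, consistent with the plug-and-play claim above.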