E-MLNet: Enhanced Mutual Learning for Universal Domain Adaptation with Sample-Specific Weighting

📅 2025-09-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Universal Domain Adaptation (UniDA) requires models to jointly perform known-class classification and unknown-class rejection under settings where source-domain labels are available, target-domain labels are absent, and the target-class set is entirely unknown. Existing mutual learning approaches (e.g., MLNet) suffer from diluted learning signals due to uniform weighting across all classifiers. To address this, we propose a dynamic weighted mutual learning framework: (1) a one-vs-all classifier bank is constructed, with sample-level prediction confidence used to adaptively focus on the most relevant class boundaries, thereby enhancing open-set entropy minimization; and (2) a sample-specific weighting mechanism is introduced to improve discriminative boundary modeling. Evaluated on VisDA and ImageCLEF benchmarks under both open-partial and open-set DA settings, our method outperforms MLNet on 22/31 and 19/31 tasks, respectively, achieving state-of-the-art average H-score.

📝 Abstract
Universal Domain Adaptation (UniDA) seeks to transfer knowledge from a labeled source to an unlabeled target domain without assuming any relationship between their label sets, requiring models to classify known-class samples while rejecting unknown ones. Advanced methods such as the Mutual Learning Network (MLNet) use a bank of one-vs-all classifiers adapted via Open-set Entropy Minimization (OEM). However, this strategy treats all classifiers equally, diluting the learning signal. We propose the Enhanced Mutual Learning Network (E-MLNet), which integrates a dynamic weighting strategy into OEM. By leveraging the closed-set classifier's predictions, E-MLNet focuses adaptation on the most relevant class boundaries for each target sample, sharpening the distinction between known and unknown classes. We conduct extensive experiments on four challenging benchmarks: Office-31, Office-Home, VisDA-2017, and ImageCLEF. The results demonstrate that E-MLNet achieves the highest average H-scores on VisDA and ImageCLEF and exhibits superior robustness over its predecessor. E-MLNet outperforms the strong MLNet baseline in the majority of individual adaptation tasks (22 out of 31 in the challenging Open-Partial DA setting and 19 out of 31 in the Open-Set DA setting), confirming the benefits of our focused adaptation strategy.
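The weighting idea described in the abstract can be sketched as a loss function: each one-vs-all classifier's binary open-set entropy on a target sample is scaled by the closed-set classifier's softmax confidence for that class, so adaptation concentrates on the boundaries most relevant to the sample. This is a minimal illustrative sketch assuming a PyTorch setup; the tensor shapes, the function name `weighted_oem_loss`, and the softmax-based weighting are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def weighted_oem_loss(ova_logits: torch.Tensor, closed_logits: torch.Tensor) -> torch.Tensor:
    """Sample-specific weighted open-set entropy minimization (sketch).

    ova_logits:    (B, K, 2) logits of K one-vs-all classifiers,
                   each scoring known-vs-unknown for its class.
    closed_logits: (B, K) logits of the closed-set classifier,
                   used here to derive per-class weights.
    """
    p = F.softmax(ova_logits, dim=-1)                 # (B, K, 2) known/unknown probs
    ent = -(p * torch.log(p + 1e-8)).sum(dim=-1)      # (B, K) binary entropy per classifier
    w = F.softmax(closed_logits, dim=-1)              # (B, K) per-sample class weights
    # Uniform weighting (plain OEM) would average ent over K;
    # here the closed-set confidence focuses the signal instead.
    return (w * ent).sum(dim=-1).mean()               # weighted sum over classes, mean over batch
```

With uniform weights this reduces to the unweighted OEM average; a confident closed-set prediction instead concentrates the entropy penalty on the matching one-vs-all boundary.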
Problem

Research questions and friction points this paper is trying to address.

Enhancing mutual learning for universal domain adaptation
Addressing the uniform weighting of classifiers in existing methods
Improving known-unknown class distinction with dynamic weighting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic weighting strategy for OEM
Sample-specific class boundary adaptation
Enhanced mutual learning network