EReLiFM: Evidential Reliability-Aware Residual Flow Meta-Learning for Open-Set Domain Generalization under Noisy Labels

πŸ“… 2025-10-14
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This paper addresses open-set domain generalization (OSDG) under label noise. To mitigate source-domain knowledge corruption caused by noisy labels and to bridge inter-domain distribution shifts, the authors propose an uncertainty-aware knowledge transfer framework. The method introduces an evidential reliability-aware mechanism that dynamically assesses sample confidence, designs a residual flow matching strategy that aligns feature flows of known classes with rejection flows of unknown classes, and integrates two-stage unsupervised evidential clustering, clean-set-guided meta-learning, and high-quality pseudo-label optimization. Extensive experiments on multiple benchmarks show significant improvements over state-of-the-art methods, validating the framework's robustness under label noise and its simultaneous gains in known-class classification accuracy and unknown-class rejection.

πŸ“ Abstract
Open-Set Domain Generalization (OSDG) aims to enable deep learning models to recognize unseen categories in new domains, which is crucial for real-world applications. Label noise hinders open-set domain generalization by corrupting source-domain knowledge, making it harder to recognize known classes and reject unseen ones. While existing methods address OSDG under Noisy Labels (OSDG-NL) using hyperbolic prototype-guided meta-learning, they struggle to bridge domain gaps, especially with limited clean labeled data. In this paper, we propose Evidential Reliability-Aware Residual Flow Meta-Learning (EReLiFM). We first introduce an unsupervised two-stage evidential loss clustering method to promote label reliability awareness. Then, we propose a residual flow matching mechanism that models structured domain- and category-conditioned residuals, enabling diverse and uncertainty-aware transfer paths beyond interpolation-based augmentation. During this meta-learning process, the model is optimized such that the update direction on the clean set maximizes the loss decrease on the noisy set, using pseudo labels derived from the most confident predicted class for supervision. Experimental results show that EReLiFM outperforms existing methods on OSDG-NL, achieving state-of-the-art performance. The source code is available at https://github.com/KPeng9510/ERELIFM.
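The abstract's first contribution, "unsupervised two-stage evidential loss clustering," combines two standard ingredients that can be sketched independently of the paper's exact losses: a subjective-logic uncertainty derived from Dirichlet evidence, and a small-loss split that clusters per-sample losses into a likely-clean and a likely-noisy group. The sketch below is a minimal illustration of those two ingredients, not the authors' implementation; the function names `dirichlet_uncertainty` and `two_cluster_split` and the 1-D 2-means split are assumptions for exposition.

```python
import numpy as np

def dirichlet_uncertainty(logits):
    """Subjective-logic uncertainty u = K / S, with Dirichlet
    parameters alpha = evidence + 1 and total strength S = sum(alpha).
    u = 1 means no evidence at all; confident logits drive u toward 0."""
    evidence = np.maximum(logits, 0.0)  # ReLU evidence, a common choice
    alpha = evidence + 1.0
    K = logits.shape[-1]
    return K / alpha.sum(axis=-1)

def two_cluster_split(losses, iters=10):
    """1-D 2-means over per-sample losses: the low-loss cluster is
    treated as the (likely) clean subset, the high-loss one as noisy."""
    c = np.array([losses.min(), losses.max()], dtype=float)
    for _ in range(iters):
        assign = np.abs(losses[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if (assign == k).any():
                c[k] = losses[assign == k].mean()
    return assign == np.argmin(c)  # True = likely clean

# Example: three small-loss samples separate from two large-loss ones.
clean_mask = two_cluster_split(np.array([0.1, 0.2, 0.15, 2.0, 2.5]))
```

In a pipeline like the one described, the uncertainty would gate how much each sample is trusted, while the loss split seeds the clean set that later drives meta-learning.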
Problem

Research questions and friction points this paper is trying to address.

Recognizing unseen categories in new domains under noisy labels
Bridging domain gaps with limited clean labeled data
Handling corrupted source-domain knowledge from label noise
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage evidential loss clustering promotes label-reliability awareness
Residual flow matching enables diverse, uncertainty-aware domain transfer beyond interpolation
Meta-learning steers clean-set updates to reduce loss on the pseudo-labeled noisy set
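The meta-learning step described in the abstract, where the update direction on the clean set should also maximize the loss decrease on the noisy set under argmax pseudo-labels, can be sketched as a first-order bi-level update. The toy below uses a linear softmax classifier in NumPy; the data, the step sizes `alpha` and `beta`, and the first-order approximation (no second-order gradients through the inner step) are all illustrative assumptions, not the paper's architecture or hyperparameters.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ce_loss_grad(W, X, y):
    """Cross-entropy loss and gradient for a linear classifier X @ W."""
    p = softmax(X @ W)
    n = len(y)
    loss = -np.log(p[np.arange(n), y] + 1e-12).mean()
    p[np.arange(n), y] -= 1.0
    return loss, X.T @ p / n

rng = np.random.default_rng(0)
X_clean, y_clean = rng.normal(size=(32, 8)), rng.integers(0, 3, 32)
X_noisy = rng.normal(size=(64, 8))  # labels are untrusted, so unused
W = np.zeros((8, 3))
alpha, beta = 0.5, 0.1  # illustrative step sizes

for _ in range(20):
    # inner step: gradient on the (trusted) clean set
    _, g_clean = ce_loss_grad(W, X_clean, y_clean)
    W_virtual = W - alpha * g_clean
    # pseudo-labels: most confident class under the virtual model
    pseudo = softmax(X_noisy @ W_virtual).argmax(axis=1)
    # outer step (first-order): keep the clean-set direction only
    # insofar as it also reduces the pseudo-labeled noisy-set loss
    _, g_noisy = ce_loss_grad(W_virtual, X_noisy, pseudo)
    W = W - alpha * g_clean - beta * g_noisy
```

A faithful implementation would differentiate the noisy-set loss through the virtual step (a MAML-style second-order term); the first-order variant above is a common cheap approximation of that coupling.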