🤖 AI Summary
This paper addresses open-set domain generalization (OSDG) under label noise. To mitigate the corruption of source-domain knowledge caused by noisy labels and to bridge inter-domain distribution shifts, we propose an uncertainty-aware knowledge transfer framework. Our method introduces an evidential reliability-aware mechanism to dynamically assess sample confidence; designs a residual flow matching strategy to align feature flows of known classes with rejection flows of unknown classes; and integrates two-stage unsupervised evidential clustering, clean-set-guided meta-learning, and high-quality pseudo-label optimization. Extensive experiments on multiple benchmarks demonstrate significant improvements over state-of-the-art methods, validating the framework's robustness to label noise and its simultaneous improvement of known-class classification accuracy and unknown-class rejection.
📝 Abstract
Open-Set Domain Generalization (OSDG) aims to enable deep learning models to recognize unseen categories in new domains, which is crucial for real-world applications. Label noise hinders open-set domain generalization by corrupting source-domain knowledge, making it harder to recognize known classes and reject unseen ones. While existing methods address OSDG under Noisy Labels (OSDG-NL) using hyperbolic prototype-guided meta-learning, they struggle to bridge domain gaps, especially with limited clean labeled data. In this paper, we propose Evidential Reliability-Aware Residual Flow Meta-Learning (EReLiFM). We first introduce an unsupervised two-stage evidential loss clustering method to promote label reliability awareness. Then, we propose a residual flow matching mechanism that models structured domain- and category-conditioned residuals, enabling diverse and uncertainty-aware transfer paths beyond interpolation-based augmentation. During this meta-learning process, the model is optimized such that the update direction on the clean set maximizes the loss decrease on the noisy set, using pseudo labels derived from the most confident predicted class for supervision. Experimental results show that EReLiFM outperforms existing methods on OSDG-NL, achieving state-of-the-art performance. The source code is available at https://github.com/KPeng9510/ERELIFM.
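The clean-set-guided meta-update described above can be illustrated with a small first-order sketch: take an inner gradient step on the trusted clean set, generate pseudo labels from the adapted model's most confident predictions on the noisy set, and fold the resulting noisy-set gradient into the outer update. This is a hypothetical toy stand-in (a logistic model on synthetic data, first-order approximation, illustrative step sizes `alpha`/`beta`), not the actual EReLiFM implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_grad(w, X, y):
    """Binary cross-entropy loss and its gradient for a logistic model."""
    p = sigmoid(X @ w)
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

# Clean set with trusted labels; noisy set supervised only by pseudo labels.
X_clean = rng.normal(size=(32, 5))
y_clean = (X_clean[:, 0] > 0).astype(float)
X_noisy = rng.normal(size=(64, 5))

w = np.zeros(5)
alpha, beta = 0.5, 0.2  # inner / outer step sizes (illustrative)
for _ in range(50):
    # Inner step: adapt the parameters on the clean set.
    _, g_clean = bce_grad(w, X_clean, y_clean)
    w_adapt = w - alpha * g_clean
    # Pseudo labels: the adapted model's most confident prediction per sample.
    y_pseudo = (sigmoid(X_noisy @ w_adapt) > 0.5).astype(float)
    # Outer step (first-order): move w so the clean-set update direction
    # also decreases the pseudo-labeled noisy-set loss.
    _, g_noisy = bce_grad(w_adapt, X_noisy, y_pseudo)
    w = w - beta * (g_clean + g_noisy)

loss_final, _ = bce_grad(w, X_clean, y_clean)
```

In the full method the same principle applies to deep features rather than a linear model, and the pseudo labels come from the evidential reliability-aware clustering rather than a raw confidence threshold.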