🤖 AI Summary
This work addresses robust semi-supervised learning (SSL) in open-world settings, where unseen classes, distribution shifts, and noisy labels coexist—a combination that severely degrades performance when the unlabeled data is of substantially lower quality than the labeled data. To tackle this, we propose a framework that integrates dynamic class-prototype alignment with uncertainty-aware pseudo-label refinement, combining contrastive learning, prototype-based representation learning, Bayesian uncertainty estimation, and adaptive-threshold pseudo-labeling. Crucially, it jointly optimizes open-set robustness and SSL accuracy, which prior work treated separately. Extensive experiments on open benchmarks—including CIFAR-10-C and WebVision-LT—show consistent gains: +5.2% classification accuracy, +8.7% F1-score for out-of-distribution detection, and markedly better generalization across diverse domain shifts and label-noise regimes.
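To make the pseudo-label refinement step concrete, the sketch below shows one plausible way to combine Bayesian uncertainty estimation (via Monte-Carlo forward passes) with adaptive per-class confidence thresholds. This is an illustrative reconstruction, not the paper's actual code: the function name `refine_pseudo_labels`, the entropy-based uncertainty proxy, and the FlexMatch-style per-class threshold scaling are all assumptions about how the summarized components could fit together.

```python
import numpy as np

def refine_pseudo_labels(mc_probs, base_threshold=0.9, max_entropy=0.5):
    """Illustrative uncertainty-aware pseudo-label filter (hypothetical sketch).

    mc_probs: array of shape (T, N, C) -- softmax outputs from T stochastic
    (e.g. MC-dropout) forward passes over N unlabeled samples with C classes.
    Returns (labels, mask): hard pseudo-labels and a boolean keep-mask.
    """
    mean_probs = mc_probs.mean(axis=0)          # (N, C) predictive mean
    labels = mean_probs.argmax(axis=1)          # hard pseudo-labels
    confidence = mean_probs.max(axis=1)         # top-1 probability per sample
    # Predictive entropy as a simple Bayesian uncertainty proxy:
    # high entropy -> the T stochastic passes disagree, so reject the label.
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
    # Adaptive per-class thresholds: relax the base threshold for classes
    # the model currently predicts with low average confidence, so that
    # hard classes still contribute pseudo-labels early in training.
    num_classes = mean_probs.shape[1]
    class_conf = np.array([
        confidence[labels == c].mean() if np.any(labels == c) else 0.0
        for c in range(num_classes)
    ])
    thresholds = base_threshold * class_conf / max(class_conf.max(), 1e-12)
    # Keep a pseudo-label only if it is both confident and low-uncertainty.
    mask = (confidence >= thresholds[labels]) & (entropy <= max_entropy)
    return labels, mask
```

In this sketch, a sample survives only if its mean confidence clears its class's adaptive threshold *and* its predictive entropy stays below `max_entropy`, so the two filters address label noise and open-set ambiguity respectively.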