🤖 AI Summary
This work addresses the performance degradation of semi-supervised learning in practical settings caused by out-of-distribution (OOD) samples within unlabeled data. To mitigate this issue, the authors propose Uncertainty-based Structure Estimation (USE), which reframes data quality control as a structural informativeness assessment. Specifically, a lightweight proxy model computes the entropy of unlabeled samples, and a threshold derived from statistical hypothesis testing is employed to retain only those samples exhibiting meaningful structural information while discarding harmful or uninformative ones. The method is algorithm-agnostic and computationally efficient, consistently improving model accuracy and robustness across varying levels of OOD contamination on benchmarks such as CIFAR-100 and Yelp Review. These results underscore the critical role of effective data filtering in enhancing the reliability of semi-supervised learning.
📝 Abstract
In this study, we introduce Uncertainty Structure Estimation (USE), a lightweight, algorithm-agnostic procedure for semi-supervised learning (SSL) that emphasizes the often-overlooked role of unlabeled data quality. SSL has achieved impressive progress, but its reliability in deployment is limited by the quality of the unlabeled pool: in practice, unlabeled data are almost always contaminated by out-of-distribution (OOD) samples, and both near-OOD and far-OOD samples can degrade performance in different ways. We argue that the bottleneck lies not in algorithmic design but in the absence of principled mechanisms for assessing and curating the quality of unlabeled data. USE trains a proxy model on the labeled set to compute entropy scores for unlabeled samples, and then derives a threshold, via statistical comparison against a reference distribution, that separates informative (structured) from uninformative (structureless) samples. This makes quality assessment a preprocessing step, removing uninformative or harmful unlabeled data before SSL training begins. Extensive experiments on image (CIFAR-100) and NLP (Yelp Review) benchmarks show that USE consistently improves accuracy and robustness under varying levels of OOD contamination. We conclude that reframing unlabeled data quality control as a structural assessment problem is a necessary component of reliable and efficient SSL in realistic mixed-distribution environments.
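The filtering step described above can be illustrated with a minimal sketch. Note the assumptions: the paper's exact statistical test is not specified here, so a simple quantile cut on a reference entropy distribution stands in for it, and the proxy model is abstracted away as precomputed class-probability arrays. All function names (`predictive_entropy`, `use_filter`) are illustrative, not from the paper.

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy of each row of predicted class probabilities."""
    eps = 1e-12  # guard against log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def use_filter(unlabeled_probs: np.ndarray,
               reference_probs: np.ndarray,
               alpha: float = 0.05) -> np.ndarray:
    """Keep unlabeled samples whose entropy falls below the alpha-quantile
    of a reference ("structureless") entropy distribution.

    This quantile rule is a hypothetical stand-in for the paper's
    statistical comparison against a reference distribution.
    """
    ref_entropy = predictive_entropy(reference_probs)
    threshold = np.quantile(ref_entropy, alpha)
    return predictive_entropy(unlabeled_probs) < threshold

# Toy demo: confident ("structured") predictions vs. near-uniform
# ("structureless") predictions over 3 classes.
rng = np.random.default_rng(0)
structured = np.tile([0.9, 0.05, 0.05], (5, 1))          # low entropy
structureless = rng.dirichlet([50.0, 50.0, 50.0], 200)   # entropy near log(3)

unlabeled = np.vstack([structured, structureless[:5]])
keep_mask = use_filter(unlabeled, structureless)
# The 5 confident samples survive; near-uniform ones are mostly discarded.
```

In a full pipeline, `reference_probs` would come from the proxy model evaluated on deliberately uninformative inputs, and `keep_mask` would select the unlabeled subset passed on to the SSL algorithm.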