USE: Uncertainty Structure Estimation for Robust Semi-Supervised Learning

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation of semi-supervised learning in practical settings caused by out-of-distribution (OOD) samples within unlabeled data. To mitigate this issue, the authors propose Uncertainty-based Structure Estimation (USE), which reframes data quality control as a structural informativeness assessment. Specifically, a lightweight proxy model computes the entropy of unlabeled samples, and a threshold derived from statistical hypothesis testing is employed to retain only those samples exhibiting meaningful structural information while discarding harmful or uninformative ones. The method is algorithm-agnostic and computationally efficient, consistently improving model accuracy and robustness across varying levels of OOD contamination on benchmarks such as CIFAR-100 and Yelp Review. These results underscore the critical role of effective data filtering in enhancing the reliability of semi-supervised learning.
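The entropy score mentioned above is presumably the Shannon entropy of the proxy model's predictive distribution over the C in-distribution classes (an assumption; the paper may use a variant):

```latex
H(x) = -\sum_{c=1}^{C} p_\theta(c \mid x)\,\log p_\theta(c \mid x)
```

Under this reading, low entropy signals that the proxy sees clear class structure in a sample, while entropy near the maximum, log C, marks samples the proxy finds structureless.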

📝 Abstract
Semi-supervised learning (SSL) has achieved impressive progress, but its reliability in deployment is limited by the quality of the unlabeled pool: in practice, unlabeled data are almost always contaminated by out-of-distribution (OOD) samples, and both near-OOD and far-OOD samples can degrade performance in different ways. We argue that the bottleneck lies not in algorithmic design but in the absence of principled mechanisms for assessing and curating the quality of unlabeled data. We introduce Uncertainty Structure Estimation (USE), a lightweight, algorithm-agnostic procedure that addresses this often-overlooked role of unlabeled data quality. USE trains a proxy model on the labeled set to compute entropy scores for unlabeled samples, then derives a threshold, via statistical comparison against a reference distribution, that separates informative (structured) from uninformative (structureless) samples. This makes quality assessment a preprocessing step, removing uninformative or harmful unlabeled data before SSL training begins. Extensive experiments on image (CIFAR-100) and NLP (Yelp Review) data show that USE consistently improves accuracy and robustness under varying levels of OOD contamination. The approach thus reframes unlabeled data quality control as a structural assessment problem and positions it as a necessary component of reliable and efficient SSL in realistic mixed-distribution environments.
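The filtering pipeline described in the abstract can be sketched in a few lines. This is a toy illustration only: the proxy here is a nearest-centroid classifier, and the threshold rule (a high percentile of the proxy's entropy on the labeled reference set) is an assumed stand-in for the paper's statistical hypothesis test, which is not specified on this page.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic labeled set: two in-distribution Gaussian classes.
X_lab = np.vstack([rng.normal(-2.0, 1.0, (50, 2)),
                   rng.normal(2.0, 1.0, (50, 2))])
y_lab = np.array([0] * 50 + [1] * 50)

# Unlabeled pool: in-distribution samples mixed with OOD noise.
X_unl = np.vstack([rng.normal(-2.0, 1.0, (40, 2)),
                   rng.normal(2.0, 1.0, (40, 2)),
                   rng.uniform(-10.0, 10.0, (20, 2))])

# Lightweight proxy "model": nearest-centroid classifier with a
# softmax over negative squared distances to the class centroids.
centroids = np.stack([X_lab[y_lab == c].mean(axis=0) for c in (0, 1)])

def predict_proba(X):
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

# Assumed threshold rule: keep unlabeled samples whose predictive
# entropy does not exceed a high percentile of the proxy's entropy on
# the labeled (reference) set; the paper instead derives the cutoff
# from a statistical comparison against a reference distribution.
ref_entropy = entropy(predict_proba(X_lab))
tau = np.percentile(ref_entropy, 95)
scores = entropy(predict_proba(X_unl))
keep = scores <= tau

X_filtered = X_unl[keep]
print(f"kept {int(keep.sum())} of {len(X_unl)} unlabeled samples")
```

Because the filter runs once before SSL training and never touches the SSL objective, it is algorithm-agnostic: the retained `X_filtered` pool can be handed to any downstream SSL method unchanged.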
Problem

Research questions and friction points this paper is trying to address.

semi-supervised learning
out-of-distribution
unlabeled data quality
robustness
data curation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncertainty Structure Estimation
Semi-supervised Learning
Out-of-Distribution Detection
Data Quality Assessment
Entropy-based Filtering
Tsao-Lun Chen
National Taiwan University of Science and Technology, Taipei, Taiwan
Chien-Liang Liu
Chang Gung University, Taoyuan, Taiwan
Tzu-Ming Harry Hsu
Massachusetts Institute of Technology
AI for Healthcare · Federated Learning · Deep Learning · Computer Vision
Tai-Hsien Wu
The Ohio State University, Columbus, OH, USA
Chi-Cheng Fu
NVIDIA, Santa Clara, CA, USA
Han-Yi E. Chou
National Taiwan University, Taipei, Taiwan
Shun-Feng Su
Professor of EE, National Taiwan University of Science and Technology
intelligent systems