AI Summary
In semi-supervised semantic segmentation, conventional pseudo-labeling relies on manually preset confidence thresholds, which are suboptimal when labeled data is scarce. To address this, we propose a dynamic feedback-driven reliability assessment framework. Our method introduces a novel class-aware true-positive confidence estimation mechanism that integrates multi-teacher ensemble confidence modeling, online class-conditional true-positive rate estimation, adaptive threshold updating, and response-feedback reinforcement within a closed loop, enabling threshold-free, self-adaptive pseudo-label selection. This framework departs from the static-threshold paradigm and delivers consistent improvements on the PASCAL VOC and Cityscapes benchmarks across various backbone architectures, achieving up to a 3.2% mIoU gain under extreme label scarcity without incurring additional annotation cost or computational overhead.
Abstract
Semi-supervised learning leverages unlabeled data to enhance model performance, addressing the limitations of fully supervised approaches. Among its strategies, pseudo-supervision has proven highly effective, typically relying on one or more teacher networks to refine pseudo-labels before training a student network. A common practice in pseudo-supervision is to filter pseudo-labels using pre-defined confidence or entropy thresholds. However, selecting optimal thresholds requires large labeled datasets, which are often unavailable in real-world semi-supervised scenarios. To overcome this challenge, we propose Ensemble-of-Confidence Reinforcement (ENCORE), a dynamic feedback-driven thresholding strategy for pseudo-label selection. Instead of relying on static confidence thresholds, ENCORE estimates class-wise true-positive confidence within the unlabeled dataset and continuously adjusts thresholds based on the model's response to different levels of pseudo-label filtering. This feedback-driven mechanism retains informative pseudo-labels while filtering unreliable ones, improving model training without manual threshold tuning. Our method integrates seamlessly into existing pseudo-supervision frameworks and significantly improves segmentation performance, particularly in data-scarce conditions. Extensive experiments demonstrate that integrating ENCORE with existing pseudo-supervision frameworks enhances performance across multiple datasets and network architectures, validating its effectiveness in semi-supervised learning.
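To make the feedback loop described above concrete, here is a minimal sketch of a class-wise, feedback-driven threshold update. The abstract gives no equations, so every name and update rule below (the median-based true-positive confidence estimate, the feedback scaling, the learning rate) is an illustrative assumption, not the paper's actual ENCORE formulation.

```python
import numpy as np

def update_thresholds(probs, pseudo_labels, thresholds, feedback, lr=0.1):
    """Illustrative feedback-driven threshold update (assumed form, not the
    paper's exact rule).

    probs:         (N, C) softmax confidences for N unlabeled pixels
    pseudo_labels: (N,) argmax class assigned to each pixel
    thresholds:    (C,) current per-class confidence thresholds
    feedback:      scalar in [-1, 1]; > 0 means the student improved after the
                   last filtering round, < 0 means it degraded
    """
    new_thresholds = thresholds.copy()
    for c in range(len(thresholds)):
        conf_c = probs[pseudo_labels == c, c]
        if conf_c.size == 0:
            continue  # no pixels assigned to class c this round
        # Estimate a class-conditional "true-positive" confidence level from
        # the confidence distribution of pixels assigned to class c.
        est = np.quantile(conf_c, 0.5)
        # Move the threshold toward the estimate; positive feedback relaxes
        # filtering (keeps more pseudo-labels), negative feedback tightens it.
        target = est - feedback * 0.05
        new_thresholds[c] += lr * (target - new_thresholds[c])
    return np.clip(new_thresholds, 0.0, 1.0)

# Pseudo-labels for the next round are then selected per class:
#   keep pixel i iff probs[i, pseudo_labels[i]] >= thresholds[pseudo_labels[i]]
```

In a training loop, `feedback` would be derived from the student's response (e.g. a change in validation loss or agreement with the teacher ensemble) after each filtering round, closing the loop without any manually preset threshold.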