🤖 AI Summary
To address the heavy reliance on labeled data and the under-exploitation of unlabeled data in semi-supervised semantic segmentation, this paper proposes SegKC, a knowledge-consulting co-training paradigm. SegKC integrates knowledge distillation into a cross-pseudo-supervision framework, employing two heterogeneous backbones to enable structured knowledge complementarity beyond mere label exchange, and introduces a knowledge-consulting loss, consistency regularization, and a lightweight feature alignment module. On Pascal VOC, SegKC achieves 87.1% mIoU with only 25% of the labeled data, 89.2% with 50%, and 89.8% with the full labeled split, while maintaining a compact model size; comparable gains are reported on Cityscapes. This work elucidates an effective mechanism for cross-model knowledge transfer within deep co-training frameworks.
📝 Abstract
Semi-Supervised Semantic Segmentation reduces reliance on extensive annotation by exploiting unlabeled data alongside state-of-the-art models to improve overall performance. Despite the success of deep co-training methods, their underlying mechanisms remain underexplored. This work revisits Cross Pseudo Supervision with dual heterogeneous backbones and introduces Knowledge Consultation (SegKC) to further enhance segmentation performance. SegKC achieves significant improvements on the Pascal and Cityscapes benchmarks, reaching mIoU scores of 87.1%, 89.2%, and 89.8% on Pascal VOC under the 1/4, 1/2, and full split partitions, respectively, while maintaining a compact model architecture.
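To make the Cross Pseudo Supervision (CPS) objective that SegKC builds on concrete, here is a minimal NumPy sketch: each of the two branches produces per-pixel class logits, takes hard pseudo-labels from the other branch's argmax, and is penalized by cross-entropy against them. This is only an illustration of the baseline CPS loss; the function names are hypothetical, the paper's knowledge-consulting loss, consistency regularization, and feature alignment module are not specified here, and in real training gradients flow only through the predicting branch, not the pseudo-label branch.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(logits, labels):
    # Mean per-pixel cross-entropy against hard integer labels.
    probs = softmax(logits).reshape(-1, logits.shape[-1])
    picked = probs[np.arange(labels.size), labels.ravel()]
    return -np.mean(np.log(picked + 1e-12))

def cross_pseudo_supervision_loss(logits_a, logits_b):
    # Hard pseudo-labels: each branch's per-pixel argmax prediction.
    pseudo_a = logits_a.argmax(axis=-1)
    pseudo_b = logits_b.argmax(axis=-1)
    # Branch A is supervised by B's pseudo-labels and vice versa.
    return cross_entropy(logits_a, pseudo_b) + cross_entropy(logits_b, pseudo_a)

# Toy example: a 4x4 "image" with 3 classes and two branches.
rng = np.random.default_rng(0)
logits_a = rng.normal(size=(4, 4, 3))
logits_b = rng.normal(size=(4, 4, 3))
loss = cross_pseudo_supervision_loss(logits_a, logits_b)
```

As the branches' predictions come to agree, each pseudo-label matches the other branch's argmax and the loss shrinks; heterogeneous backbones keep the two error patterns decorrelated so the exchanged pseudo-labels stay informative.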