🤖 AI Summary
This work addresses the challenge of semi-supervised semantic segmentation in computational pathology, where pixel-level annotations are scarce and pseudo-labels are often unreliable. The authors propose a dual-modality semantic alignment framework that, for the first time, integrates text–prototype dual semantic alignment into histopathology image segmentation. Built upon a pathology-pretrained Transformer encoder, the method jointly optimizes prototype-level and text-level alignment branches within a shared embedding space, enhanced by cross-view consistency constraints and multi-objective end-to-end training to mitigate class ambiguity and improve pseudo-label quality. Evaluated on the GlaS and CRAG datasets, the approach achieves state-of-the-art performance, yielding Dice score improvements of up to 2.6% and 8.6%, respectively, using only 10% of labeled data.
📄 Abstract
Semi-supervised semantic segmentation in computational pathology remains challenging due to scarce pixel-level annotations and unreliable pseudo-label supervision. We propose UniSemAlign, a dual-modal semantic alignment framework that enhances visual segmentation by injecting explicit class-level structure into pixel-wise learning. Built upon a pathology-pretrained Transformer encoder, UniSemAlign introduces complementary prototype-level and text-level alignment branches in a shared embedding space, providing structured guidance that reduces class ambiguity and stabilizes pseudo-label refinement. The aligned representations are fused with visual predictions to generate more reliable supervision for unlabeled histopathology images. The framework is trained end-to-end with supervised segmentation, cross-view consistency, and cross-modal alignment objectives. Extensive experiments on the GlaS and CRAG datasets demonstrate that UniSemAlign substantially outperforms recent semi-supervised baselines under limited supervision, achieving Dice improvements of up to 2.6% on GlaS and 8.6% on CRAG with only 10% labeled data, and strong improvements at 20% supervision. Code is available at: https://github.com/thailevann/UniSemAlign
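The abstract describes fusing visual predictions with prototype-level and text-level alignment scores to refine pseudo-labels. Below is a minimal, hypothetical sketch of one way such a fusion could work for a single pixel embedding: per-class cosine similarities to prototype and text embeddings are turned into distributions and blended with the visual softmax. The function name, the weighted-average fusion rule, and the weights `w` are illustrative assumptions, not the paper's actual method.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fused_class_scores(pixel_emb, prototypes, text_embs, visual_probs,
                       w=(0.5, 0.25, 0.25)):
    """Hypothetical fusion of visual, prototype, and text cues.

    pixel_emb:    embedding of one pixel in the shared space
    prototypes:   one prototype embedding per class
    text_embs:    one text embedding per class
    visual_probs: per-class softmax from the segmentation head
    w:            assumed fusion weights (visual, prototype, text)
    """
    proto_probs = softmax([cosine(pixel_emb, p) for p in prototypes])
    text_probs = softmax([cosine(pixel_emb, t) for t in text_embs])
    wv, wp, wt = w
    # Weighted average of three per-class distributions; the result
    # could then threshold into a refined pseudo-label.
    return [wv * v + wp * p + wt * t
            for v, p, t in zip(visual_probs, proto_probs, text_probs)]
```

In this toy setup, a pixel whose embedding aligns with a class's prototype and text embedding gets its visual score pulled toward that class, which is the intuition behind using cross-modal alignment to reduce class ambiguity in pseudo-labels.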