🤖 AI Summary
To address the scarcity of labeled data for disease severity assessment in medical imaging, this paper proposes SEMISE, a semi-supervised representation learning framework. SEMISE is the first to incorporate severity-aware modeling into semi-supervised medical image analysis, jointly integrating a SimCLR-style self-supervised contrastive objective, a supervised cross-entropy loss, multi-scale feature enhancement, and a lightweight severity projection head, all unified via dual-path consistency regularization and semantic alignment. Under limited labeling budgets, this combination yields markedly more discriminative representations for severity grading. In experiments, SEMISE achieves a 12% absolute improvement over baselines on classification and a 3% improvement on segmentation, outperforming state-of-the-art semi-supervised and self-supervised methods, and thereby improves label efficiency and representation quality together.
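The summary above lists three training signals: a SimCLR-style contrastive loss on augmented views, a supervised cross-entropy loss on the labeled subset, and a consistency term between predictions on two augmentations. A minimal sketch of how such terms are typically combined is below; the function names (`nt_xent`, `semise_loss`) and the weighting coefficients `lam_c`, `lam_u` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(logits, labels):
    # Supervised loss on the labeled subset.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def nt_xent(z1, z2, tau=0.5):
    # SimCLR-style contrastive (NT-Xent) loss between two augmented views.
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)          # exclude self-similarity
    n = len(z1)
    # The positive for view i is its counterpart in the other half.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(logp[np.arange(2 * n), targets])

def consistency(logits_a, logits_b):
    # Consistency regularization: predictions on two augmentations
    # of the same image should agree.
    return np.mean((softmax(logits_a) - softmax(logits_b)) ** 2)

def semise_loss(sup_logits, labels, z1, z2, logits_a, logits_b,
                lam_c=1.0, lam_u=1.0):
    # Hypothetical weighted combination of the three signals.
    return (cross_entropy(sup_logits, labels)
            + lam_c * nt_xent(z1, z2)
            + lam_u * consistency(logits_a, logits_b))
```

In practice each term would be computed on mini-batches by a shared encoder with separate projection/classification heads; the sketch only shows how the objectives compose.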
📝 Abstract
This paper introduces SEMISE, a novel method for representation learning in medical imaging that combines self-supervised and supervised learning. By leveraging both labeled and augmented data, SEMISE addresses the challenge of data scarcity and enhances the encoder's ability to extract meaningful features. This integrated approach leads to more informative representations, improving performance on downstream tasks. As a result, our approach achieved a 12% improvement in classification and a 3% improvement in segmentation, outperforming existing methods. These results demonstrate the potential of SEMISE to advance medical image analysis and offer more accurate solutions for healthcare applications, particularly in contexts where labeled data is limited.