Self-Supervised Learning for Pre-training Capsule Networks: Overcoming Medical Imaging Dataset Challenges

📅 2025-02-07
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Capsule networks face significant challenges in colon polyp diagnosis due to limited medical imaging data, severe class imbalance, and distributional shift, all of which hinder effective supervised training. Method: This paper introduces, for the first time, a self-supervised pre-training framework for capsule networks that jointly leverages contrastive learning (a SimCLR variant) and image in-painting/colourisation as auxiliary tasks. To address the absence of pre-trained capsule models and of native pre-training support in mainstream frameworks, the authors further propose a medical-image-aware transfer initialisation strategy to enhance feature robustness and interpretability. Contribution/Results: Evaluated on the PICCOLO dataset, the approach achieves a 5.26% improvement in polyp classification accuracy over alternative weight initialisation methods. The results demonstrate the efficacy and generalisability of the proposed self-supervised paradigm under small-scale, imbalanced, and distributionally shifted medical imaging conditions.
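The SimCLR-variant contrastive objective mentioned in the summary is the NT-Xent (normalised temperature-scaled cross-entropy) loss: two augmented views of the same image should embed close together while all other images act as negatives. Below is a minimal NumPy sketch of that loss under illustrative assumptions (batch size, embedding dimension, and temperature are not taken from the paper), not the authors' implementation:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over two batches of embeddings.

    z1, z2: (N, d) arrays; row i of z1 and row i of z2 are embeddings
    of two augmented views of the same image (a positive pair).
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalise rows
    sim = z @ z.T / temperature                        # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    n = z1.shape[0]
    # the positive for index i is i+n, and vice versa
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of each row's softmax against its positive pair
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()
```

Well-aligned view pairs yield a lower loss than unrelated pairs, which is what drives the encoder to capture augmentation-invariant features.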

๐Ÿ“ Abstract
Deep learning techniques are increasingly being adopted in diagnostic medical imaging. However, the limited availability of high-quality, large-scale medical datasets presents a significant challenge, often necessitating the use of transfer learning approaches. This study investigates self-supervised learning methods for pre-training capsule networks in polyp diagnostics for colon cancer. We used the PICCOLO dataset, comprising 3,433 samples, which exemplifies typical challenges in medical datasets: small size, class imbalance, and distribution shifts between data splits. Capsule networks offer inherent interpretability due to their architecture and inter-layer information routing mechanism. However, their limited native implementation in mainstream deep learning frameworks and the lack of pre-trained versions pose a significant challenge. This is particularly true if aiming to train them on small medical datasets, where leveraging pre-trained weights as initial parameters would be beneficial. We explored two auxiliary self-supervised learning tasks, colourisation and contrastive learning, for capsule network pre-training. We compared self-supervised pre-trained models against alternative initialisation strategies. Our findings suggest that contrastive learning and in-painting techniques are suitable auxiliary tasks for self-supervised learning in the medical domain. These techniques helped guide the model to capture important visual features that are beneficial for the downstream task of polyp classification, increasing its accuracy by 5.26% compared to other weight initialisation methods.
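The "inter-layer information routing mechanism" the abstract credits for interpretability is, in the standard capsule-network formulation, dynamic routing-by-agreement (Sabour et al., 2017). The NumPy sketch below illustrates that mechanism only; capsule counts and dimensions are illustrative assumptions, not the architecture used in this paper:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Non-linearity that shrinks vector length into [0, 1) while
    preserving orientation, so length can encode probability."""
    norm2 = (s ** 2).sum(axis=axis, keepdims=True)
    return norm2 / (1.0 + norm2) * s / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, iterations=3):
    """Route predictions u_hat of shape (n_in, n_out, d) from lower-level
    capsules to higher-level capsules by iterative agreement."""
    b = np.zeros(u_hat.shape[:2])                  # routing logits (n_in, n_out)
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coeffs
        s = (c[..., None] * u_hat).sum(axis=0)     # weighted sum -> (n_out, d)
        v = squash(s)                              # output capsule vectors
        b = b + (u_hat * v[None]).sum(axis=-1)     # reinforce agreeing routes
    return v
```

The coupling coefficients `c` are what make the architecture inspectable: they show which lower-level capsules voted for each output capsule.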
Problem

Research questions and friction points this paper is trying to address.

Self-supervised pre-training for capsule networks.
Addressing small, imbalanced medical imaging datasets.
Improving polyp classification accuracy in diagnostics.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised learning for capsule networks
Contrastive learning enhances feature capture
In-painting aids medical image analysis
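Both auxiliary tasks listed above work by turning unlabeled images into supervised (input, target) pairs. A minimal sketch of that pair construction is below; image shapes, the luma weights, and the square-patch masking scheme are illustrative assumptions, not details from the paper:

```python
import numpy as np

def colourisation_pair(rgb):
    """Colourisation pretext task: the network receives the greyscale
    image and must predict the original colours."""
    grey = rgb @ np.array([0.299, 0.587, 0.114])   # ITU-R BT.601 luma weights
    return grey[..., None], rgb                    # (H, W, 1) input, (H, W, 3) target

def inpainting_pair(rgb, rng, patch=8):
    """In-painting pretext task: a random square patch is zeroed out
    and the network must reconstruct the missing region."""
    h, w, _ = rgb.shape
    y = rng.integers(0, h - patch)
    x = rng.integers(0, w - patch)
    masked = rgb.copy()
    masked[y:y + patch, x:x + patch] = 0.0
    return masked, rgb
```

Because the target is derived from the image itself, no diagnostic labels are consumed during pre-training, which is what makes these tasks attractive for small, imbalanced medical datasets.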