🤖 AI Summary
In medical image segmentation, existing prototype-based methods typically employ fixed class-level prototypes, limiting their ability to capture intra-class variation and sample diversity. To address this, we propose Instance-Adaptive Prototype Learning (IAPL), the first framework that jointly models universal class prototypes and instance-specific prototypes, enabling fine-grained segmentation via pixel-to-prototype comparison. We further introduce a confidence-weighted feature reweighting mechanism and a hierarchical Transformer decoder to enhance the modeling of complex anatomical structures, alongside a self-supervised foreground filtering strategy that focuses learning on salient regions. Evaluated on multiple public medical imaging benchmarks, IAPL consistently outperforms state-of-the-art methods, demonstrating superior robustness to intra-class variability and strong generalization capability.
📝 Abstract
Medical Image Segmentation (MIS) plays a crucial role in medical therapy planning and robot navigation. Prototype learning methods in MIS generate segmentation masks through pixel-to-prototype comparison. However, current approaches often overlook sample diversity by using a fixed prototype per semantic class, and they neglect intra-class variation within each input. In this paper, we propose to generate instance-adaptive prototypes for MIS, integrating a common prototype proposal (CPP) that captures common visual patterns with an instance-specific prototype proposal (IPP) tailored to each input. To further account for intra-class variation, we guide the IPP generation by re-weighting the intermediate feature maps according to their confidence scores, which are generated hierarchically by a transformer decoder. Additionally, we introduce a novel self-supervised filtering strategy to prioritize foreground pixels during the training of the transformer decoder. Extensive experiments demonstrate the favorable performance of our method.
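The core idea, pixel-to-prototype comparison with an instance-adaptive prototype, can be illustrated with a minimal NumPy sketch. The function name, the confidence-weighted pooling for the instance prototype, and the equal-weight fusion of common and instance prototypes are all illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def l2norm(x, axis=-1, eps=1e-8):
    # Normalize vectors so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def segment_with_prototypes(feat, common_proto, conf):
    """Toy instance-adaptive prototype segmentation (illustrative).

    feat:         (H, W, D) feature map from an encoder.
    common_proto: (C, D) fixed class-level prototypes (the CPP analogue).
    conf:         (H, W, C) per-pixel class confidence scores, e.g. a
                  softmax over an initial prediction (assumed here).
    Returns a (H, W) label map.
    """
    H, W, D = feat.shape
    C = common_proto.shape[0]
    flat = feat.reshape(-1, D)                                 # (HW, D)
    w = conf.reshape(-1, C)                                    # (HW, C)
    # Instance-specific prototypes (IPP analogue): confidence-weighted
    # average of this input's own features, one prototype per class.
    inst_proto = (w.T @ flat) / (w.sum(axis=0)[:, None] + 1e-8)  # (C, D)
    # Fuse common and instance prototypes; 0.5/0.5 is an assumption.
    proto = l2norm(0.5 * common_proto + 0.5 * inst_proto)
    # Pixel-to-prototype cosine similarity, then argmax over classes.
    sim = l2norm(flat) @ proto.T                               # (HW, C)
    return sim.argmax(axis=-1).reshape(H, W)
```

Because the instance prototypes are pooled from the current input, the fused prototype shifts toward each sample's own appearance, which is the intuition behind handling intra-class variation.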