🤖 AI Summary
Medical image segmentation is hindered by scarce annotations, ambiguous anatomical boundaries, and limited cross-domain generalization. To address these challenges, this work proposes a CLIP-based probabilistic vision-language adaptation framework that enables bidirectional image-text interaction through patch-level embeddings and a probabilistic cross-modal attention mechanism. The approach incorporates a soft patch-level contrastive loss and explicit uncertainty quantification. Evaluated across 16 datasets spanning five imaging modalities and six organ classes, the method consistently outperforms state-of-the-art approaches in segmentation accuracy, data efficiency, and robustness, while also producing reliable, interpretable uncertainty maps.
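The summary mentions a soft patch-level contrastive loss. The paper's exact formulation is not given here, so the following is a minimal numpy sketch of one plausible reading: each image patch carries a soft target distribution over text prompts (rather than a hard one-to-one match), and the loss is the cross-entropy between those soft targets and the softmax over patch-text cosine similarities. The function name, the temperature `tau`, and the toy inputs are all illustrative assumptions.

```python
import numpy as np

def log_softmax(x, axis=-1):
    # Numerically stable log-softmax
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def soft_patch_contrastive_loss(patch_emb, text_emb, soft_targets, tau=0.07):
    """Hypothetical soft patch-level contrastive loss.

    patch_emb:    (P, D) patch embeddings
    text_emb:     (T, D) text-prompt embeddings
    soft_targets: (P, T) per-patch soft distribution over prompts
    """
    # L2-normalise so dot products are cosine similarities
    p = patch_emb / np.linalg.norm(patch_emb, axis=-1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=-1, keepdims=True)
    logits = p @ t.T / tau                              # (P, T)
    # Cross-entropy against the soft targets, averaged over patches
    return -(soft_targets * log_softmax(logits)).sum(-1).mean()

# Toy example: 3 patches, 2 prompts, 4-dim embeddings (illustrative values)
rng = np.random.default_rng(1)
patch = rng.normal(size=(3, 4))
text = rng.normal(size=(2, 4))
soft = np.array([[0.8, 0.2], [0.5, 0.5], [0.1, 0.9]])
loss = soft_patch_contrastive_loss(patch, text, soft)
print(float(loss))
```

Compared with a hard one-hot contrastive target, soft targets let a patch partially match several prompts, which is the "more nuanced semantic learning across diverse textual prompts" the abstract refers to.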
📝 Abstract
Medical image segmentation remains challenging due to limited training annotations, ambiguous anatomical features, and domain shifts. While vision-language models such as CLIP offer strong cross-modal representations, their potential for dense, text-guided medical image segmentation remains underexplored. We present MedCLIPSeg, a novel framework that adapts CLIP for robust, data-efficient, and uncertainty-aware medical image segmentation. Our approach leverages patch-level CLIP embeddings through probabilistic cross-modal attention, enabling bidirectional interaction between image and text tokens and explicit modeling of predictive uncertainty. Combined with a soft patch-level contrastive loss that encourages more nuanced semantic learning across diverse textual prompts, MedCLIPSeg improves data efficiency and domain generalizability. Extensive experiments across 16 datasets spanning five imaging modalities and six organs show that MedCLIPSeg outperforms prior methods in accuracy, efficiency, and robustness, while providing interpretable uncertainty maps that highlight the local reliability of segmentation results. This work demonstrates the potential of probabilistic vision-language modeling for text-driven medical image segmentation.
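The abstract's core mechanism is probabilistic cross-modal attention between patch and text tokens with explicit predictive uncertainty. The paper's exact design is not specified here; the sketch below assumes one common probabilistic-embedding recipe, where each token is a diagonal Gaussian (mean plus log-variance), attention logits come from the mean embeddings, and per-patch uncertainty aggregates the patch's own variance with the variance of the attended text tokens. All shapes, names, and the uncertainty formula are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def probabilistic_cross_attention(patch_mu, patch_logvar, text_mu, text_logvar):
    """Cross-modal attention over Gaussian token embeddings (illustrative).

    Each token is N(mu, diag(exp(logvar))). Attention logits use the mean
    embeddings; per-patch uncertainty combines the patch's own variance
    with the variance carried over from the attended text tokens.
    """
    d = patch_mu.shape[-1]
    logits = patch_mu @ text_mu.T / np.sqrt(d)       # (P, T) similarity logits
    attn = softmax(logits, axis=-1)                  # rows sum to 1
    attended = attn @ text_mu                        # (P, D) text-conditioned features
    attended_var = attn @ np.exp(text_logvar)        # propagate text variances
    # Scalar per-patch uncertainty for an uncertainty map
    uncertainty = np.exp(patch_logvar).mean(-1) + attended_var.mean(-1)
    return attended, uncertainty

# Toy example: 4 patches, 2 text tokens, 8-dim embeddings (illustrative values)
rng = np.random.default_rng(0)
P, T, D = 4, 2, 8
patch_mu, text_mu = rng.normal(size=(P, D)), rng.normal(size=(T, D))
patch_logvar = rng.normal(scale=0.1, size=(P, D))
text_logvar = rng.normal(scale=0.1, size=(T, D))
out, unc = probabilistic_cross_attention(patch_mu, patch_logvar, text_mu, text_logvar)
print(out.shape, unc.shape)
```

Reshaping the per-patch `uncertainty` vector back onto the patch grid yields exactly the kind of spatial uncertainty map the abstract describes, with high values marking regions where the segmentation is locally less reliable.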