MedCLIP-SAMv2: Towards Universal Text-Driven Medical Image Segmentation

📅 2024-09-28
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Medical image segmentation faces challenges including scarce annotated data, poor generalizability, and limited interactivity. To address these, we propose the first text-driven zero-shot/weakly supervised framework for general medical image segmentation. Our method comprises three key components: (1) a decoupled hard-negative contrastive loss to optimize BiomedCLIP, enhancing robustness of text–image alignment; (2) a multimodal information bottleneck (M2IB) module that generates text-guided cross-modal visual prompts; and (3) a novel weakly supervised fine-tuning strategy leveraging zero-shot segmentation outputs as pseudo-labels, thereby reducing annotation dependency. Integrating BiomedCLIP with SAMv2, our framework achieves state-of-the-art performance across four diverse modalities—breast ultrasound, brain tumor MRI, chest X-ray, and lung CT—outperforming existing zero-shot and weakly supervised approaches in accuracy, cross-domain generalization, and annotation efficiency.
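The decoupled hard-negative contrastive loss described in component (1) can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the authors' implementation: "decoupling" here means removing the positive pair from the denominator (as in decoupled contrastive learning), and "hard-negative" means re-weighting negatives by their exponentiated similarity; the `temperature` and `beta` hyperparameters are hypothetical, and the paper's exact DHN-NCE formulation may differ.

```python
import numpy as np

def dhn_nce_loss(img_emb, txt_emb, temperature=0.07, beta=1.0):
    """Toy sketch of a decoupled, hard-negative-weighted contrastive loss.

    Decoupling: positives are excluded from the denominator.
    Hard negatives: each negative is re-weighted by exp(beta * sim),
    normalized to have mean weight 1, so harder negatives count more.
    Hypothetical hyperparameters; the paper's exact loss may differ.
    """
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    sim = img @ txt.T / temperature          # (N, N) similarity logits
    n = sim.shape[0]
    mask = ~np.eye(n, dtype=bool)            # negatives only (decoupled)

    def one_direction(s):
        pos = np.diag(s)                     # positive-pair logits
        neg = np.exp(s) * mask               # positives dropped from denom
        w = np.exp(beta * s) * mask          # hard-negative weights
        w = w / (w.sum(axis=1, keepdims=True)
                 / mask.sum(axis=1, keepdims=True))
        denom = (w * neg).sum(axis=1)
        return -(pos - np.log(denom)).mean()

    # symmetric image-to-text and text-to-image terms
    return 0.5 * (one_direction(sim) + one_direction(sim.T))
```

With matched image/text embeddings the loss is lower than with mismatched ones, which is the behavior any contrastive fine-tuning objective should exhibit.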

📝 Abstract
Segmentation of anatomical structures and pathological regions in medical images is essential for modern clinical diagnosis, disease research, and treatment planning. While significant advancements have been made in deep learning-based segmentation techniques, many of these methods still suffer from limitations in data efficiency, generalizability, and interactivity. As a result, developing precise segmentation methods that require fewer labeled datasets remains a critical challenge in medical image analysis. Recently, the introduction of foundation models like CLIP and Segment-Anything-Model (SAM), with robust cross-domain representations, has paved the way for interactive and universal image segmentation. However, further exploration of these models for data-efficient segmentation in medical imaging is still needed and highly relevant. In this paper, we introduce MedCLIP-SAMv2, a novel framework that integrates the CLIP and SAM models to perform segmentation on clinical scans using text prompts, in both zero-shot and weakly supervised settings. Our approach includes fine-tuning the BiomedCLIP model with a new Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss, and leveraging the Multi-modal Information Bottleneck (M2IB) to create visual prompts for generating segmentation masks from SAM in the zero-shot setting. We also investigate using zero-shot segmentation labels within a weakly supervised paradigm to enhance segmentation quality further. Extensive testing across four diverse segmentation tasks and medical imaging modalities (breast tumor ultrasound, brain tumor MRI, lung X-ray, and lung CT) demonstrates the high accuracy of our proposed framework. Our code is available at https://github.com/HealthX-Lab/MedCLIP-SAMv2.
Problem

Research questions and friction points this paper is trying to address.

Enhance medical image segmentation accuracy.
Reduce labeled data dependency in segmentation.
Improve generalizability across diverse medical imaging tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates CLIP and SAM models
Uses text prompts for segmentation
Implements DHN-NCE and M2IB techniques
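One concrete, self-contained step in the pipeline above is turning the M2IB saliency map into a visual prompt for SAM. The sketch below is an illustrative assumption, not the authors' code: the normalization, the fixed 0.5 threshold, and the `(x0, y0, x1, y1)` box format are all placeholders for whatever post-processing the paper actually uses.

```python
import numpy as np

def saliency_to_box(saliency, thresh=0.5):
    """Convert a text-conditioned saliency map into a box prompt for SAM.

    Toy illustration only: normalize the saliency map to [0, 1],
    threshold it, and return the bounding box (x0, y0, x1, y1) of the
    above-threshold region. Threshold value and box format are
    assumptions, not the paper's exact scheme.
    """
    smin, smax = saliency.min(), saliency.max()
    s = (saliency - smin) / (smax - smin + 1e-8)   # normalize to [0, 1]
    ys, xs = np.nonzero(s >= thresh)               # salient pixel coords
    if len(xs) == 0:
        return None                                # nothing to prompt with
    return (xs.min(), ys.min(), xs.max(), ys.max())
```

In the full framework this box (or point prompts derived the same way) would be passed to SAM's predictor alongside the image to produce the final mask.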
Taha Koleilat
Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada
Hojat Asgariandehkordi
Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada
H. Rivaz
Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada
Yiming Xiao
Associate Professor, Department of Computer Science and Software Engineering, Concordia University
Biomedical AI · Medical VR · medical image analysis · image-guided surgery · computer-assisted diagnosis