VLSM-Ensemble: Ensembling CLIP-based Vision-Language Models for Enhanced Medical Image Segmentation

📅 2025-09-05
🤖 AI Summary
Vision-language models (VLMs) such as CLIP and BiomedCLIP underperform specialized medical image segmentation architectures (e.g., CRIS), primarily due to their limited capacity for fine-grained spatial reasoning and anatomical detail modeling.

Method: We propose a lightweight integration framework, requiring no text prompt engineering, that fuses CLIP-based VLSMs (e.g., BiomedCLIPSeg) with low-complexity CNNs via multi-dataset transfer learning to enhance generalization. The design jointly leverages global semantic understanding from VLMs and local feature fidelity from CNNs while preserving computational efficiency.

Contribution/Results: Our method significantly improves segmentation robustness across diverse medical imaging domains. On the BKAI polyp dataset, it achieves a +6.3% Dice score improvement; on four other medical segmentation benchmarks, gains range from +1.0% to +6.0%. Notably, it matches or surpasses CRIS in several scenarios, narrowing the performance gap between general-purpose VLMs and task-specific segmentation models.
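The paper does not publish its exact fusion rule in this summary, but the core idea of combining a VLSM's prediction with a low-complexity CNN's prediction can be illustrated with a minimal sketch. The function below assumes weighted probability averaging, which is one common ensembling strategy; the `weight` parameter and function name are hypothetical, not taken from the paper.

```python
import numpy as np

def ensemble_masks(vlsm_probs: np.ndarray, cnn_probs: np.ndarray,
                   weight: float = 0.5, threshold: float = 0.5) -> np.ndarray:
    """Fuse two per-pixel foreground probability maps by weighted averaging.

    vlsm_probs / cnn_probs: arrays in [0, 1] of identical shape (H, W),
    e.g. sigmoid outputs from BiomedCLIPSeg and a small CNN.
    `weight` is an illustrative mixing coefficient; the paper's actual
    fusion mechanism may differ.
    """
    fused = weight * vlsm_probs + (1.0 - weight) * cnn_probs
    # Binarize the fused probabilities into a segmentation mask.
    return (fused >= threshold).astype(np.uint8)
```

For example, equal weighting lets the CNN's confident local predictions override weak VLSM responses, and vice versa, which is consistent with the stated goal of combining global semantics with local feature fidelity.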

📝 Abstract
Vision-language models and their adaptations to image segmentation tasks present enormous potential for producing highly accurate and interpretable results. However, implementations based on CLIP and BiomedCLIP still lag behind more sophisticated architectures such as CRIS. In this work, instead of focusing on text prompt engineering as is the norm, we attempt to narrow this gap by showing how to ensemble vision-language segmentation models (VLSMs) with a low-complexity CNN. By doing so, we achieve a significant Dice score improvement of 6.3% on the BKAI polyp dataset using the ensembled BiomedCLIPSeg, while other datasets exhibit gains ranging from 1% to 6%. Furthermore, we provide initial results on four additional radiology and non-radiology datasets. We conclude that ensembling works differently across these datasets (from outperforming to underperforming the CRIS model), indicating a topic for future investigation by the community. The code is available at https://github.com/juliadietlmeier/VLSM-Ensemble.
Problem

Research questions and friction points this paper is trying to address.

Enhancing medical image segmentation accuracy with CLIP-based models
Improving vision-language segmentation via low-complexity CNN ensembling
Bridging performance gap between CLIP adaptations and sophisticated architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensembling CLIP-based vision-language segmentation models
Integrating low-complexity CNN with VLSMs
Achieving significant Dice score improvements across datasets
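Since the reported gains are expressed as Dice score improvements, it may help to recall how the metric is computed. The sketch below is a standard binary Dice coefficient, 2|A∩B| / (|A| + |B|); the function name and epsilon smoothing are illustrative conventions, not details from the paper.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary segmentation masks.

    pred / target: arrays of identical shape; nonzero entries count as
    foreground. `eps` avoids division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))
```

A perfect prediction yields a Dice score of 1.0, so a 6.3% improvement on BKAI corresponds to 6.3 Dice points on this 0-to-1 (or 0-to-100) scale.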