🤖 AI Summary
This work addresses the challenge of robust medical image segmentation under realistic clinical conditions characterized by scarce annotations and image degradations such as motion blur and low-dose CT noise. To this end, the authors propose a bidirectional vision–language fusion framework that iteratively refines multimodal interactions between image features and textual descriptions. An augmentation consistency objective regularizes intermediate representations against perturbed input views, stabilizing training. Leveraging a contrastive language–image pretraining architecture, the model achieves state-of-the-art segmentation accuracy and robustness on the QaTa-COV19 and MosMedData+ benchmarks, significantly outperforming existing methods even when trained with only 1% of the labeled data.
📝 Abstract
Medical image segmentation is a cornerstone of computer-assisted diagnosis and treatment planning. While recent multimodal vision-language models have shown promise in enhancing semantic understanding through textual descriptions, their resilience in "in-the-wild" clinical settings, characterized by scarce annotations and hardware-induced image degradations, remains under-explored.
We introduce BiCLIP (Bidirectional and Consistent Language-Image Processing), a framework engineered to bolster robustness in medical segmentation. BiCLIP features a bidirectional multimodal fusion mechanism in which visual features iteratively refine textual representations, tightening the semantic alignment between the two modalities. To further stabilize learning, we add an augmentation consistency objective that regularizes intermediate representations against perturbed input views.
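To make these two components concrete, here is a minimal PyTorch sketch. The abstract includes no code, so the names (`BidirectionalFusionBlock`, `consistency_loss`) and the specific choices of cross-attention for fusion and cosine distance for the consistency term are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BidirectionalFusionBlock(nn.Module):
    """One round of bidirectional fusion (hypothetical design): text tokens
    attend to image tokens, then image tokens attend to the refined text."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.txt_from_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.img_from_txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_txt = nn.LayerNorm(dim)
        self.norm_img = nn.LayerNorm(dim)

    def forward(self, img_tokens, txt_tokens):
        # Visual features refine the textual representation (text queries image).
        txt_upd, _ = self.txt_from_img(txt_tokens, img_tokens, img_tokens)
        txt_tokens = self.norm_txt(txt_tokens + txt_upd)
        # The refined text then conditions the visual features (image queries text).
        img_upd, _ = self.img_from_txt(img_tokens, txt_tokens, txt_tokens)
        img_tokens = self.norm_img(img_tokens + img_upd)
        return img_tokens, txt_tokens


def consistency_loss(feats_clean: torch.Tensor, feats_aug: torch.Tensor) -> torch.Tensor:
    """Augmentation consistency: penalize divergence between intermediate
    features of a clean view and a perturbed view (cosine distance here)."""
    return 1.0 - F.cosine_similarity(feats_clean, feats_aug, dim=-1).mean()


# Example shapes: 14x14 image patch tokens and a 32-token text description.
fusion = BidirectionalFusionBlock(dim=512)
img, txt = torch.randn(2, 196, 512), torch.randn(2, 32, 512)
img, txt = fusion(img, txt)
```

Stacking several such blocks yields the iterative refinement the abstract describes; the consistency term would then be computed between the clean and perturbed forward passes.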
Evaluation on the QaTa-COV19 and MosMedData+ benchmarks demonstrates that BiCLIP consistently surpasses state-of-the-art image-only and multimodal baselines. Notably, BiCLIP maintains high performance when trained on as little as 1% of the labeled data and remains robust to clinical artifacts, including motion blur and low-dose CT noise.
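The abstract names motion blur and low-dose CT noise as the evaluated artifacts but does not specify how they are generated. A minimal sketch of common simulations follows; the kernel length, photon budget, and dose fraction are hypothetical parameters, not values from the paper.

```python
import torch
import torch.nn.functional as F


def motion_blur(img: torch.Tensor, length: int = 9) -> torch.Tensor:
    """Horizontal motion blur via a 1D averaging kernel.
    img: (N, C, H, W) in [0, 1]; length is a hypothetical blur extent."""
    c = img.shape[1]
    kernel = torch.zeros(c, 1, 1, length, device=img.device)
    kernel[:, :, 0, :] = 1.0 / length
    return F.conv2d(img, kernel, padding=(0, length // 2), groups=c)


def low_dose_noise(img: torch.Tensor, dose: float = 0.25) -> torch.Tensor:
    """Approximate low-dose CT noise with Poisson photon statistics at a
    reduced dose fraction (photon budget and dose are hypothetical)."""
    photons = 1e4 * dose  # nominal photon count per pixel at the reduced dose
    noisy = torch.poisson(img.clamp(0, 1) * photons) / photons
    return noisy.clamp(0, 1)
```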