BiCLIP: Bidirectional and Consistent Language-Image Processing for Robust Medical Image Segmentation

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of robust medical image segmentation under realistic clinical conditions characterized by scarce annotations and image degradations such as motion blur and low-dose CT noise. To this end, the authors propose a bidirectional vision–language fusion framework that iteratively refines multimodal interactions between image features and textual descriptions. Enhanced consistency regularization is introduced to stabilize training dynamics. Leveraging a contrastive language–image pretraining architecture, the model achieves state-of-the-art segmentation accuracy and robustness on the QaTa-COV19 and MosMedData+ benchmarks, significantly outperforming existing methods even when trained with only 1% labeled data.

📝 Abstract
Medical image segmentation is a cornerstone of computer-assisted diagnosis and treatment planning. While recent multimodal vision-language models have shown promise in enhancing semantic understanding through textual descriptions, their resilience in "in-the-wild" clinical settings (characterized by scarce annotations and hardware-induced image degradations) remains under-explored. We introduce BiCLIP (Bidirectional and Consistent Language-Image Processing), a framework engineered to bolster robustness in medical segmentation. BiCLIP features a bidirectional multimodal fusion mechanism that enables visual features to iteratively refine textual representations, ensuring superior semantic alignment. To further stabilize learning, we implement an augmentation consistency objective that regularizes intermediate representations against perturbed input views. Evaluation on the QaTa-COV19 and MosMedData+ benchmarks demonstrates that BiCLIP consistently surpasses state-of-the-art image-only and multimodal baselines. Notably, BiCLIP maintains high performance when trained on as little as 1% of labeled data and exhibits significant resistance to clinical artifacts, including motion blur and low-dose CT noise.
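The bidirectional fusion the abstract describes (image and text features iteratively refining each other) can be illustrated with a minimal single-head cross-attention sketch. This is not the authors' implementation; the function names, the residual update scheme, and the token dimensions are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, keys_values):
    # Single-head cross-attention: each query token gathers context
    # from the other modality's tokens (hypothetical simplification;
    # real models use learned projections and multiple heads).
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)   # (Nq, Nk)
    return softmax(scores, axis=-1) @ keys_values   # (Nq, d)

def bidirectional_fusion(img_tokens, txt_tokens, n_iters=2):
    # Text attends to image features, then image attends to the
    # refined text, repeated for a few rounds; residual updates
    # keep token shapes fixed so the loop can iterate.
    for _ in range(n_iters):
        txt_tokens = txt_tokens + cross_attend(txt_tokens, img_tokens)
        img_tokens = img_tokens + cross_attend(img_tokens, txt_tokens)
    return img_tokens, txt_tokens

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 64))  # 16 image patch tokens, dim 64
txt = rng.standard_normal((8, 64))   # 8 text tokens, dim 64
img_f, txt_f = bidirectional_fusion(img, txt)
print(img_f.shape, txt_f.shape)
```

The key point the sketch captures is that both modalities act as queries in turn, so refinement flows in both directions rather than text conditioning the image one-way.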
Problem

Research questions and friction points this paper is trying to address.

medical image segmentation
annotation scarcity
image degradation
clinical artifacts
robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

bidirectional multimodal fusion
augmentation consistency
robust medical segmentation
vision-language model
semantic alignment
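The augmentation consistency idea listed above can be sketched as a penalty that keeps intermediate features of a clean input and a perturbed view (e.g. simulated low-dose noise) close. This is a minimal numpy illustration under assumed names; BiCLIP's actual objective, perturbations, and feature layers may differ.

```python
import numpy as np

def encoder(x, W):
    # Toy stand-in for one intermediate layer of a segmentation network.
    return np.tanh(x @ W)

def consistency_loss(feats_clean, feats_perturbed):
    # Mean-squared distance between the two views' intermediate
    # representations; minimizing it regularizes the network to be
    # stable under input degradations.
    return float(np.mean((feats_clean - feats_perturbed) ** 2))

rng = np.random.default_rng(1)
W = rng.standard_normal((32, 16)) * 0.1
x = rng.standard_normal((4, 32))                  # batch of 4 clean inputs
x_aug = x + 0.05 * rng.standard_normal(x.shape)   # perturbed view (toy noise)
loss = consistency_loss(encoder(x, W), encoder(x_aug, W))
print(loss)
```

In training, such a term would be added to the supervised segmentation loss, which is one way a model can keep learning signal from the many unlabeled or degraded images when only 1% of the data carries labels.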
Saivan Talaei
Department of Computer Engineering, University of Kurdistan, Iran
Fatemeh Daneshfar
Department of Computer Engineering, University of Kurdistan, Iran
Abdulhady Abas Abdullah
Researcher in Artificial Intelligence, UKH Centre
LLM, Prompt Engineering, NLP, Low Resource Languages
Mustaqeem Khan
Assistant Professor, CIT-CSSE, UAEU
Computer Vision, Affective Computing, Emotion Recognition