🤖 AI Summary
This work addresses the limited performance of existing vision-language foundation models, such as CLIP and BiomedCLIP, on fine-grained medical cross-modal retrieval tasks. The authors propose a multi-task learning framework built upon BiomedCLIP, incorporating a lightweight MLP projection head and a composite loss function that jointly optimizes a binary classification objective (normal/abnormal), a supervised contrastive objective, and a CLIP alignment objective. The approach substantially improves bidirectional retrieval between chest X-ray images and radiology reports, outperforming baseline methods in both clinical relevance and semantic discriminability. t-SNE visualizations further show clearer clustering of cross-modal representations for normal versus abnormal cases, confirming the model's improved semantic consistency and discriminative capability.
📝 Abstract
Vision-language foundation models such as CLIP and BiomedCLIP offer strong cross-modal embeddings; however, they are not optimized for fine-grained medical retrieval tasks, such as retrieving clinically relevant radiology reports from chest X-ray (CXR) image queries. To address this shortcoming, we propose a multi-task learning framework to fine-tune BiomedCLIP and evaluate the resulting improvements in CXR image-text retrieval. Using BiomedCLIP as the backbone, we add a lightweight MLP projection head trained with a composite multi-task loss comprising: (1) a binary cross-entropy loss to distinguish normal from abnormal CXR studies, (2) a supervised contrastive loss to reinforce intra-class consistency, and (3) a CLIP loss to maintain cross-modal alignment. Experimental results demonstrate that the fine-tuned model achieves more balanced and clinically meaningful performance on both image-to-text and text-to-image retrieval than the pretrained BiomedCLIP and general-purpose CLIP models. Furthermore, t-SNE visualizations reveal clearer semantic clustering of normal and abnormal cases, indicating enhanced diagnostic sensitivity. These findings highlight the value of domain-adaptive, multi-task learning for advancing cross-modal retrieval in biomedical applications.
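The abstract's three-part composite loss can be sketched concretely. The snippet below is a minimal NumPy illustration, not the authors' implementation: it assumes L2-normalized image/text embeddings, a shared temperature for both contrastive terms, and equal loss weights (`w`), none of which are specified in the abstract. The supervised contrastive term follows the standard SupCon formulation over the pooled image and text embeddings, and the CLIP term is the usual symmetric InfoNCE over matched pairs.

```python
import numpy as np

def bce_loss(logits, labels, eps=1e-9):
    # binary cross-entropy on normal/abnormal classifier logits
    p = 1.0 / (1.0 + np.exp(-logits))
    return -np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))

def supcon_loss(feats, labels, temperature=0.07):
    # supervised contrastive loss: pull same-label embeddings together
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)
    total = 0.0
    for i in range(n):
        pos = (labels == labels[i]) & not_self[i]
        if not pos.any():
            continue
        # log-sum-exp over all other samples as the denominator
        log_denom = np.log(np.sum(np.exp(sim[i][not_self[i]])))
        total += -np.mean(sim[i][pos] - log_denom)
    return total / n

def clip_loss(img, txt, temperature=0.07):
    # symmetric InfoNCE: matched image/text pairs sit on the diagonal
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    n = logits.shape[0]
    def ce(l):
        l = l - l.max(axis=1, keepdims=True)  # stabilize softmax
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(logp[np.arange(n), np.arange(n)])
    return 0.5 * (ce(logits) + ce(logits.T))

def composite_loss(img, txt, cls_logits, labels, w=(1.0, 1.0, 1.0)):
    # weighted sum of the three objectives; weights w are hypothetical
    both = np.vstack([img, txt])
    both_labels = np.concatenate([labels, labels])
    return (w[0] * bce_loss(cls_logits, labels)
            + w[1] * supcon_loss(both, both_labels)
            + w[2] * clip_loss(img, txt))
```

In practice each term operates on the MLP projector's outputs, and the weights would be tuned on a validation set; the equal weighting here is purely illustrative.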