FetalCLIP: A Visual-Language Foundation Model for Fetal Ultrasound Image Analysis

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fetal ultrasound image analysis is hindered by high image complexity and a severe scarcity of cross-modal paired data, impeding effective representation learning and limiting the generalization of foundation models. To address this, we introduce the first vision-language foundation model tailored to fetal ultrasound: (1) We assemble the largest paired fetal ultrasound image-text dataset used for foundation model development to date, comprising 210,035 aligned pairs; (2) We propose an anatomy-guided cross-modal pretraining paradigm that integrates the CLIP architecture with fetal-specific data augmentation and anatomical semantic alignment; (3) Our model achieves state-of-the-art performance across diverse downstream tasks, including classification, gestational age estimation, congenital heart defect (CHD) detection, and anatomical structure segmentation, outperforming prior methods by an average of 12.6% in few-shot settings and thereby demonstrating strong few-shot generalization.

📝 Abstract
Foundation models are becoming increasingly effective in the medical domain, offering pre-trained models on large datasets that can be readily adapted for downstream tasks. Despite this progress, fetal ultrasound images remain a challenging domain for foundation models due to their inherent complexity, often requiring substantial additional training and facing limitations due to the scarcity of paired multimodal data. To overcome these challenges, here we introduce FetalCLIP, a vision-language foundation model capable of generating a universal representation of fetal ultrasound images. FetalCLIP was pre-trained using a multimodal learning approach on a diverse dataset of 210,035 fetal ultrasound images paired with text. This represents the largest paired dataset of its kind used for foundation model development to date. This unique training approach allows FetalCLIP to effectively learn the intricate anatomical features present in fetal ultrasound images, resulting in robust representations that can be used for a variety of downstream applications. In extensive benchmarking across a range of key fetal ultrasound applications, including classification, gestational age estimation, congenital heart defect (CHD) detection, and fetal structure segmentation, FetalCLIP outperformed all baselines while demonstrating remarkable generalizability and strong performance even with limited labeled data. We plan to release the FetalCLIP model publicly for the benefit of the broader scientific community.
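The pretraining described in the abstract follows the CLIP recipe of contrastive image-text alignment. As a rough illustration only (not the authors' released code), here is a minimal NumPy sketch of the symmetric InfoNCE objective that CLIP-style models optimize; the function names and the 0.07 temperature default are illustrative assumptions:

```python
import numpy as np

def log_softmax(x, axis):
    """Numerically stable log-softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    image_emb, text_emb: (N, D) arrays where row i of each is a matched
    image/caption pair; matched pairs sit on the diagonal of the logits.
    """
    # L2-normalize so the dot product is cosine similarity
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix
    n = len(img)
    idx = np.arange(n)
    # Cross-entropy toward the diagonal, averaged over both directions
    # (image-to-text retrieval and text-to-image retrieval)
    i2t = -log_softmax(logits, axis=1)[idx, idx]
    t2i = -log_softmax(logits, axis=0)[idx, idx]
    return float((i2t + t2i).mean() / 2)
```

With well-aligned pairs (high diagonal similarity) the loss approaches zero; with mismatched pairs it grows, which is what drives the encoders to pull each ultrasound image toward its own caption and away from the others in the batch.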
Problem

Research questions and friction points this paper is trying to address.

Developing a model for fetal ultrasound analysis
Overcoming data scarcity in medical imaging
Enhancing accuracy in fetal health diagnostics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-language foundation model
Multimodal learning approach
Robust ultrasound image representation
Fadillah Maani
Department of Computer Vision, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE.
Numan Saeed
Mohamed bin Zayed University of Artificial Intelligence
AI for Medicine · Medical Imaging · Machine Learning
Tausifa Saleem
Department of Computer Vision, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE.
Zaid Farooq
Department of Computer Vision, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE.
Hussain Alasmawi
Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)
Machine Learning
Werner Diehl
Corniche Hospital, Abu Dhabi Health Services Company (SEHA), Abu Dhabi, UAE.
Ameera Mohammad
Corniche Hospital, Abu Dhabi Health Services Company (SEHA), Abu Dhabi, UAE.
Gareth Waring
Corniche Hospital, Abu Dhabi Health Services Company (SEHA), Abu Dhabi, UAE.
Saudabi Valappi
Corniche Hospital, Abu Dhabi Health Services Company (SEHA), Abu Dhabi, UAE.
Leanne Bricker
Corniche Hospital, Abu Dhabi Health Services Company (SEHA), Abu Dhabi, UAE.
Mohammad Yaqub
Researcher in Biomedical Engineering, Associate Professor at MBZUAI
Artificial Intelligence · Medical Image Analysis · Machine Learning · Deep Learning