Deep Learning Analysis of Prenatal Ultrasound for Identification of Ventriculomegaly

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the early automated detection of fetal ventriculomegaly in prenatal ultrasound. We propose a fine-tuning approach based on the self-supervised pretrained ultrasound foundation model USF-MAE, in its first application to binary classification of fetal ventricular dilation. The model employs a Masked Autoencoder architecture built upon a Vision Transformer (ViT), pretrained on large-scale unlabeled ultrasound data, and integrates Eigen-CAM for clinically interpretable visualization. Evaluated via 5-fold cross-validation and an independent test set, it achieves an F1-score of 91.76% (cross-validation) and 91.78% (test), with 97.24% test accuracy, significantly outperforming baselines including VGG-19, ResNet-50, and ViT-B/16. This work demonstrates the effective transferability of USF-MAE to fetal neurosonographic diagnosis and establishes a high-accuracy, interpretable AI tool for early risk assessment of chromosomal abnormalities and genetic syndromes in prenatal care.
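The fine-tuning setup described above can be sketched as a binary classification head placed on top of the pretrained encoder. The sketch below is illustrative only: the class name, embedding dimension, and stand-in encoder are assumptions, and the actual USF-MAE weights and training pipeline are not reproduced here.

```python
import torch
import torch.nn as nn

class VentriculomegalyClassifier(nn.Module):
    """Binary classification head on top of a pretrained ViT encoder.

    `encoder` is assumed to return one embedding per image of size
    `embed_dim` (e.g. the [CLS] token of a ViT-B/16 MAE encoder,
    embed_dim=768). In the paper this would be the USF-MAE encoder.
    """
    def __init__(self, encoder: nn.Module, embed_dim: int = 768):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(embed_dim, 2)  # normal vs. ventriculomegaly

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(x))

# Stand-in encoder purely for illustration; fine-tuning would instead
# load the pretrained USF-MAE ViT encoder here.
toy_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 768))
model = VentriculomegalyClassifier(toy_encoder)
logits = model(torch.randn(4, 3, 224, 224))  # batch of 4 images -> (4, 2)
```

During fine-tuning the encoder can be kept frozen or updated with a small learning rate; the paper's exact schedule is not specified in this summary.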

📝 Abstract
The proposed study aimed to develop a deep learning model capable of detecting ventriculomegaly on prenatal ultrasound images. Ventriculomegaly is a prenatal condition characterized by dilated cerebral ventricles of the fetal brain and is important to diagnose early, as it can be associated with an increased risk of fetal aneuploidies and/or underlying genetic syndromes. An Ultrasound Self-Supervised Foundation Model with Masked Autoencoding (USF-MAE), recently developed by our group, was fine-tuned for a binary classification task to distinguish fetal brain ultrasound images as either normal or showing ventriculomegaly. The USF-MAE incorporates a Vision Transformer encoder pretrained on more than 370,000 ultrasound images from the OpenUS-46 corpus. For this study, the pretrained encoder was adapted and fine-tuned on a curated dataset of fetal brain ultrasound images to optimize its performance for ventriculomegaly detection. Model evaluation was conducted using 5-fold cross-validation and an independent test cohort, and performance was quantified using accuracy, precision, recall, specificity, F1-score, and area under the receiver operating characteristic curve (AUC). The proposed USF-MAE model reached an F1-score of 91.76% on 5-fold cross-validation and 91.78% on the independent test set, exceeding the baseline models by 19.37% and 16.15% (VGG-19), 2.31% and 2.56% (ResNet-50), and 5.03% and 11.93% (ViT-B/16), respectively. The model also showed a high mean test precision of 94.47% and an accuracy of 97.24%. Eigen-CAM (Eigen Class Activation Map) heatmaps showed that the model focused on the ventricle region when diagnosing ventriculomegaly, supporting its explainability and clinical plausibility.
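The evaluation metrics listed in the abstract all derive from the binary confusion matrix. A minimal sketch of how they relate (the counts used in the example are illustrative, not the study's actual confusion matrix):

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Accuracy, precision, recall (sensitivity), specificity, and
    F1-score from confusion-matrix counts, with ventriculomegaly
    treated as the positive class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}

# Illustrative counts only:
m = binary_metrics(tp=85, fp=5, tn=900, fn=10)
```

Note that with a class-imbalanced cohort (ventriculomegaly is rare), accuracy can sit well above F1, which is consistent with the paper reporting 97.24% accuracy alongside a 91.78% F1-score.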
Problem

Research questions and friction points this paper is trying to address.

Developing a deep learning model to detect ventriculomegaly in prenatal ultrasound images
Fine-tuning a self-supervised foundation model for fetal brain abnormality classification
Automating the diagnosis of dilated cerebral ventricles with a Vision Transformer architecture
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned self-supervised model for ventriculomegaly classification
Used Vision Transformer pretrained on large ultrasound dataset
Achieved high accuracy with explainable ventricle-focused heatmaps
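The ventricle-focused heatmaps come from Eigen-CAM, which projects a layer's feature maps onto their first principal component. A minimal NumPy sketch of the idea (shapes and the use of `abs` plus min-max normalization are implementation assumptions; the paper's exact Eigen-CAM configuration is not reproduced):

```python
import numpy as np

def eigen_cam(activations: np.ndarray) -> np.ndarray:
    """Eigen-CAM heatmap from feature maps of shape (C, H, W).

    For a ViT, `activations` would be the patch-token embeddings
    reshaped to a spatial grid (e.g. (768, 14, 14) for ViT-B/16 at
    224x224 input). The method is class-agnostic: it needs no gradients.
    """
    C, H, W = activations.shape
    A = activations.reshape(C, H * W)
    A = A - A.mean(axis=1, keepdims=True)
    # The first right-singular vector is the dominant spatial pattern
    # shared across channels.
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    cam = np.abs(vt[0]).reshape(H, W)  # singular-vector sign is arbitrary
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

heatmap = eigen_cam(np.random.default_rng(0).normal(size=(768, 14, 14)))
```

The resulting map is upsampled to the input resolution and overlaid on the ultrasound image; in the paper, high-activation regions coincided with the dilated ventricles.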
Youssef Megahed
Department of Systems and Computer Engineering, Carleton University, Ottawa, Ontario, Canada; Department of Methodological and Implementation Research, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
Inok Lee
Department of Acute Care Research, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
R. Ducharme
Department of Acute Care Research, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
Aylin Erman
Department of Acute Care Research, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada; Department of Clinical Science and Translational Medicine, University of Ottawa, Ottawa, Ontario, Canada
Olivier X. Miguel
Department of Acute Care Research, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
K. Dick
Children’s Hospital of Eastern Ontario Research Institute, Ottawa, Ontario, Canada; BORN Ontario, Children’s Hospital of Eastern, Ottawa, Ontario, Canada
Adrian D. C. Chan
Professor, Carleton University
biomedical signal processing; biomedical image processing; machine learning; physiological monitoring; accessibility
S. Hawken
Children’s Hospital of Eastern Ontario Research Institute, Ottawa, Ontario, Canada; Department of Methodological and Implementation Research, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada; Department of Clinical Science and Translational Medicine, University of Ottawa, Ottawa, Ontario, Canada
M. Walker
Department of Obstetrics and Gynecology, University of Ottawa, Ottawa, Ontario, Canada; Department of Acute Care Research, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada; Children’s Hospital of Eastern Ontario Research Institute, Ottawa, Ontario, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada; Department of Obstetrics, Gynecology & Newborn Care, The Ottawa Hospital, Ottawa, Ontario, Canada; International and Global Health Office, University of Ottawa, Ottawa, Ontario, Canada
Felipe Moretti
Department of Acute Care Research, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada; Department of Obstetrics and Gynecology, University of Ottawa, Ottawa, Ontario, Canada; Department of Obstetrics, Gynecology & Newborn Care, The Ottawa Hospital, Ottawa, Ontario, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada