🤖 AI Summary
Fine-grained anatomical structure identification in fetal trunk standard-plane ultrasound images remains challenging due to inherently low contrast and blurred texture. Method: This paper proposes a lightweight multi-contrast fusion module that introduces a novel multi-contrast attention mechanism at the network’s lower layers. By adaptively weighting low-level features extracted under multiple contrast enhancements, the module significantly improves modeling of subtle anatomical details with negligible parameter overhead. It operates directly on raw ultrasound data to enhance feature representation of clinically critical regions. Contribution/Results: Evaluated on a fetal trunk standard-plane dataset, the method achieves substantial improvements in classification accuracy—particularly for key structures including the heart, spine, and stomach bubble—thereby enhancing diagnostic consistency and reliability. The approach demonstrates clear clinical utility in automated fetal anatomy assessment.
📝 Abstract
Purpose: Prenatal ultrasound is a key tool in evaluating fetal structural development and detecting abnormalities, contributing to reduced perinatal complications and improved neonatal survival. Accurate identification of standard fetal torso planes is essential for reliable assessment and personalized prenatal care. However, limitations such as low contrast and unclear texture details in ultrasound imaging pose significant challenges for fine-grained anatomical recognition. Methods: We propose a novel Multi-Contrast Fusion Module (MCFM) to enhance the model's ability to extract detailed information from ultrasound images. MCFM operates exclusively on the lower layers of the neural network, directly processing raw ultrasound data. By assigning attention weights to image representations under different contrast conditions, the module enhances feature modeling while explicitly maintaining minimal parameter overhead. Results: The proposed MCFM was evaluated on a curated dataset of fetal torso plane ultrasound images. Experimental results demonstrate that MCFM substantially improves recognition performance, with a minimal increase in model complexity. The integration of multi-contrast attention enables the model to better capture subtle anatomical structures, contributing to higher classification accuracy and clinical reliability. Conclusions: Our method provides an effective solution for improving fetal torso plane recognition in ultrasound imaging. By enhancing feature representation through multi-contrast fusion, the proposed approach supports clinicians in achieving more accurate and consistent diagnoses, demonstrating strong potential for clinical adoption in prenatal screening. The codes are available at https://github.com/sysll/MCFM.