🤖 AI Summary
To address the clinical challenge posed by low-quality, non-standard fetal ultrasound images that fail to meet diagnostic requirements, this paper proposes an iterative counterfactual generation method based on conditional diffusion models. It is the first to integrate anatomical-constraint-guided sampling and a multi-scale quality-aware loss into a fetal ultrasound image optimization framework. The method progressively reconstructs raw images into high-fidelity, clinically compliant standard-plane images while preserving anatomical plausibility. Quantitative evaluation demonstrates significant improvements in PSNR and SSIM, and a blind physician assessment yields an 89% acceptance rate. This work establishes a novel, interpretable, and verifiable paradigm for ultrasound image quality assessment, and provides effective support for optimizing teaching feedback and enhancing diagnostic reliability.
📝 Abstract
Obstetric ultrasound image quality is crucial for accurate diagnosis and monitoring of fetal health. However, producing high-quality standard planes is difficult, as it depends on the sonographer's expertise and on factors such as maternal BMI and fetal dynamics. In this work, we propose using diffusion-based counterfactual explainable AI to generate realistic high-quality standard planes from low-quality non-standard ones. Through quantitative and qualitative evaluation, we demonstrate the effectiveness of our method in producing plausible counterfactuals of increased quality. This shows future promise both for enhancing the training of clinicians by providing visual feedback, and for improving image quality and, consequently, downstream diagnosis and monitoring.
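The diffusion-based counterfactual idea described above can be sketched in toy form: partially noise the low-quality input (which preserves its coarse anatomy), then iteratively denoise while conditioning on the high-quality target. All names below (`denoise_step`, `quality_score`, `counterfactual`) are illustrative stand-ins, not the authors' code; the real method uses a trained conditional diffusion model rather than the simple pull-toward-target step shown here.

```python
import numpy as np

def denoise_step(x, cond, strength=0.1):
    """One reverse-diffusion step, sketched as a small pull toward the
    conditioning target (a real model would predict and remove noise)."""
    return x + strength * (cond - x)

def quality_score(x, target):
    """Stand-in for a multi-scale quality-aware loss: negative MSE
    to the reference (higher is better)."""
    return -float(np.mean((x - target) ** 2))

def counterfactual(x_lowq, cond_highq, n_steps=50, noise_level=0.3, seed=0):
    """Iterative counterfactual generation:
    1) partially noise the low-quality input, keeping coarse anatomy;
    2) denoise while conditioning on the high-quality target."""
    rng = np.random.default_rng(seed)
    x = x_lowq + noise_level * rng.standard_normal(x_lowq.shape)
    for _ in range(n_steps):
        x = denoise_step(x, cond_highq)
    return x

x_low = np.zeros((8, 8))   # stand-in low-quality image
x_high = np.ones((8, 8))   # stand-in high-quality reference plane
x_cf = counterfactual(x_low, x_high)
# The counterfactual should score better against the reference than the input.
print(quality_score(x_cf, x_high) > quality_score(x_low, x_high))
```

The partial-noising step is the key design choice: it limits how far the generated counterfactual can drift from the input, which is what keeps the edit anatomically plausible rather than producing an unrelated high-quality image.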