Regional quality estimation for echocardiography using deep learning

📅 2024-08-01
🏛️ arXiv.org
🤖 AI Summary
Existing cardiac ultrasound image quality assessment methods yield only global scores, failing to distinguish anatomical view correctness from intrinsic image quality and lacking support for segment-level analysis, which limits their clinical utility. To address this, the authors propose a myocardial-segment-oriented regional quality assessment approach. The method combines U-Net-based segmentation, generalized contrast-to-noise ratio (gCNR) computation, local coherence modeling on B-mode images, and an end-to-end convolutional network to enable fine-grained, interpretable quality quantification. Evaluated against expert annotations, the end-to-end approach achieves a Spearman correlation coefficient of 0.69, surpassing conventional metrics and comparable to the inter-observer agreement among three cardiologists (0.63). The framework has been open-sourced as part of the arqee toolkit, enabling real-time sonographer guidance and enhancing the reliability of quantitative echocardiographic measurements.
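The agreement figures above (0.69 for the model, 0.63 between observers) are Spearman rank correlations against expert annotations. A minimal sketch of that evaluation with `scipy.stats.spearmanr`, using hypothetical per-segment scores rather than the paper's data:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical regional quality scores for six myocardial segments:
# model predictions vs. one cardiologist's annotations (illustrative
# values only, not from the paper).
predicted = np.array([0.9, 0.4, 0.7, 0.2, 0.8, 0.5])
annotated = np.array([0.85, 0.3, 0.75, 0.25, 0.6, 0.5])

# Spearman's rho measures monotonic agreement between the two rankings.
rho, p_value = spearmanr(predicted, annotated)
print(f"rho = {rho:.2f}")
```

Because Spearman correlation operates on ranks, it rewards a model that orders segments by quality consistently with the annotators, even if the absolute score scales differ.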

📝 Abstract
Automatic estimation of cardiac ultrasound image quality can be beneficial for guiding operators and ensuring the accuracy of clinical measurements. Previous work often fails to distinguish the view correctness of the echocardiogram from the image quality. Additionally, previous studies only provide a global image quality value, which limits their practical utility. In this work, we developed and compared three methods to estimate image quality: 1) classic pixel-based metrics such as the generalized contrast-to-noise ratio (gCNR), computed on myocardial segments as regions of interest and the left ventricle lumen as background, obtained using a U-Net segmentation; 2) local image coherence, derived from a U-Net model that predicts coherence from B-mode images; 3) a deep convolutional network that predicts the quality of each region directly in an end-to-end fashion. We evaluate each method against manual regional image quality annotations by three experienced cardiologists. The results indicate poor performance of the gCNR metric, with a Spearman correlation to the annotations of rho = 0.24. The end-to-end learning model obtains the best result, rho = 0.69, comparable to the inter-observer correlation, rho = 0.63. Finally, the coherence-based method, with rho = 0.58, outperformed the classical metrics and is more generic than the end-to-end approach. The image quality prediction tool is available as an open source Python library at https://github.com/GillesVanDeVyver/arqee.
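Method 1) rests on the gCNR, which is defined as one minus the overlap between the intensity distributions of the region of interest and the background. A minimal numpy sketch of that histogram-overlap computation (an illustration of the metric's definition, not the authors' implementation):

```python
import numpy as np

def gcnr(roi: np.ndarray, background: np.ndarray, bins: int = 256) -> float:
    """Generalized contrast-to-noise ratio: 1 minus the overlap of the
    normalized intensity histograms of ROI and background pixels.
    Ranges from 0 (identical distributions) to 1 (fully separable)."""
    lo = min(roi.min(), background.min())
    hi = max(roi.max(), background.max())
    h_roi, _ = np.histogram(roi, bins=bins, range=(lo, hi))
    h_bg, _ = np.histogram(background, bins=bins, range=(lo, hi))
    p_roi = h_roi / h_roi.sum()
    p_bg = h_bg / h_bg.sum()
    overlap = np.minimum(p_roi, p_bg).sum()
    return 1.0 - overlap

# Illustrative use: well-separated myocardium vs. lumen intensities
rng = np.random.default_rng(0)
myocardium = rng.normal(10.0, 1.0, 20_000)  # bright ROI pixels
lumen = rng.normal(0.0, 1.0, 20_000)        # dark background pixels
print(gcnr(myocardium, lumen))  # close to 1.0
```

In the paper's setup, the ROI and background pixel sets would come from the U-Net segmentation masks of each myocardial segment and the left ventricle lumen, respectively.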
Problem

Research questions and friction points this paper is trying to address.

Distinguish echocardiogram view correctness from image quality
Provide regional image quality estimation instead of global
Evaluate deep learning methods for automatic quality assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

U-Net segmentation for myocardial regions
Local image coherence via U-Net prediction
End-to-end convolutional network quality prediction
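The best-performing contribution is the end-to-end network that maps a B-mode image directly to per-segment quality scores. A toy PyTorch sketch of that idea; the architecture, segment count (6), and input size are assumptions for illustration, not the paper's network:

```python
import torch
import torch.nn as nn

class RegionalQualityNet(nn.Module):
    """Toy end-to-end regressor: B-mode image in, one quality score per
    myocardial segment out. Layer sizes and the segment count are
    illustrative choices, not the authors' architecture."""
    def __init__(self, n_segments: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling over the image
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32, n_segments),
            nn.Sigmoid(),  # one quality score in [0, 1] per segment
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = RegionalQualityNet()
scores = model(torch.randn(2, 1, 128, 128))  # batch of 2 grayscale images
print(scores.shape)  # (2, 6): one score per segment, per image
```

Training such a model would regress these outputs against the cardiologists' regional annotations, which is what allows it to learn quality cues beyond pixel statistics like gCNR.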
Gilles Van De Vyver
Norwegian University of Science and Technology
AI, computer vision, ultrasound
Svein-Erik Måsøy
Norwegian University of Science and Technology, Trondheim, Norway
Håvard Dalen
Norwegian University of Science and Technology, Trondheim, Norway; St. Olavs hospital, Trondheim, Norway
Bjørnar Leangen Grenne
Norwegian University of Science and Technology, Trondheim, Norway; St. Olavs hospital, Trondheim, Norway
Espen Holte
Norwegian University of Science and Technology, Trondheim, Norway; St. Olavs hospital, Trondheim, Norway
Sindre Hellum Olaisen
Norwegian University of Science and Technology, Trondheim, Norway
John Nyberg
Norwegian University of Science and Technology, Trondheim, Norway
Andreas Østvik
Senior Research Scientist, SINTEF; Norwegian University of Science and Technology
Medical imaging, machine learning, robotics
Lasse Løvstakken
Norwegian University of Science and Technology, Trondheim, Norway
Erik Smistad
Norwegian University of Science and Technology and SINTEF, Medical Image Analysis
medical imaging, image segmentation, GPU, ultrasound, deep learning