Automated Interpretable 2D Video Extraction from 3D Echocardiography

📅 2025-11-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional cardiac ultrasound relies on manual acquisition of multiple 2D views, limiting comprehensive characterization of 3D anatomy. To address this, we propose a deep learning–guided method integrating medical prior knowledge for automatic standard view extraction from 3D echocardiographic volumes. First, a view classification network identifies key anatomical planes; then, anatomy-driven geometric reasoning, constrained by clinical heuristics and cardiac landmarks, precisely localizes and reconstructs spatially calibrated, diagnostically complete standard 2D video sequences. This work pioneers explainable, expert-knowledge–informed view generation, enabling seamless integration with downstream AI models (e.g., EchoNet-Measurement) and clinical quantification. Evaluated on 1,600 videos from two hospitals, our method achieves 96% view identification accuracy, and the generated videos meet clinical interpretability and quantitative analysis requirements. Code and a 29-case open-source 3D echocardiography dataset are publicly released.
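The core geometric step described above, resampling a 2D image plane out of a 3D volume once landmarks fix the plane, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the landmark interface (three points in voxel coordinates), and the output size are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_plane(volume, p0, p1, p2, size=128, spacing=1.0):
    """Resample a 2D image from a 3D volume along the plane through three
    landmark points p0, p1, p2 (voxel coordinates, axis order matching volume).
    Illustrative sketch of landmark-constrained view extraction."""
    p0, p1, p2 = (np.asarray(p, float) for p in (p0, p1, p2))
    u = p1 - p0
    u /= np.linalg.norm(u)                  # first in-plane axis
    n = np.cross(u, p2 - p0)
    n /= np.linalg.norm(n)                  # plane normal
    v = np.cross(n, u)                      # second in-plane axis
    # Grid of sample points centered on p0, `spacing` voxels apart.
    s = (np.arange(size) - size / 2) * spacing
    gu, gv = np.meshgrid(s, s, indexing="ij")
    pts = p0 + gu[..., None] * u + gv[..., None] * v   # (size, size, 3)
    coords = pts.transpose(2, 0, 1)                    # (3, size, size)
    # Trilinear interpolation; out-of-volume samples fill with 0.
    return map_coordinates(volume, coords, order=1, mode="constant")
```

Running this per time frame of a 3D sequence yields a standard 2D video; in the paper the plane itself is chosen by the view classifier plus cardiologist-provided heuristics rather than hand-picked points.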

📝 Abstract
Although the heart has complex three-dimensional (3D) anatomy, conventional medical imaging with cardiac ultrasound relies on a series of 2D videos showing individual cardiac structures. 3D echocardiography is a developing modality that now offers adequate image quality for clinical use, with potential to streamline acquisition and improve assessment of off-axis features. We propose an automated method to select standard 2D views from 3D cardiac ultrasound volumes, allowing physicians to interpret the data in their usual format while benefiting from the speed and usability of 3D scanning. Applying a deep learning view classifier together with downstream geometric heuristics based on anatomical landmarks and rules provided by cardiologists, we reconstruct standard echocardiography views. This approach was validated by three cardiologists in a blinded evaluation (96% accuracy in 1,600 videos from 2 hospitals). The downstream 2D videos were also validated for their ability to detect cardiac abnormalities using AI echocardiography models (EchoPrime and PanEcho), as well as to generate clinical-grade measurements of cardiac anatomy (EchoNet-Measurement). We demonstrated that the extracted 2D videos preserve spatial calibration and diagnostic features, allowing clinicians to obtain accurate real-world interpretations from 3D volumes. We release the code and a dataset of 29 3D echocardiography videos at https://github.com/echonet/3d-echo.
Problem

Research questions and friction points this paper is trying to address.

Automatically extracts standard 2D views from 3D cardiac ultrasound volumes
Enables clinical interpretation using conventional 2D video formats
Preserves spatial calibration and diagnostic features from 3D data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep learning classifier selects standard 2D views
Anatomical landmarks guide automated view reconstruction
Extracted videos maintain spatial calibration for diagnosis
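The spatial-calibration claim above hinges on carrying the volume's physical voxel spacing through the resampling step, so that pixel distances in the extracted view translate into millimeters for downstream measurement models. A minimal sketch of that bookkeeping, with a hypothetical helper name and interface not taken from the paper:

```python
import numpy as np

def plane_pixel_spacing_mm(voxel_spacing_mm, u, v):
    """Physical pixel spacing (mm) of a resampled plane whose in-plane unit
    axes u and v are expressed in voxel coordinates. Hypothetical helper:
    scales each axis by the (possibly anisotropic) voxel spacing of the
    source 3D volume, so measurements on the 2D view stay calibrated."""
    voxel_spacing_mm = np.asarray(voxel_spacing_mm, float)
    du = np.linalg.norm(np.asarray(u, float) * voxel_spacing_mm)  # mm per pixel step along u
    dv = np.linalg.norm(np.asarray(v, float) * voxel_spacing_mm)  # mm per pixel step along v
    return du, dv
```

With this spacing attached to each extracted frame, a length measured in pixels on the 2D video (e.g., by EchoNet-Measurement) converts directly to a real-world distance.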