US-X Complete: A Multi-modal Approach to Anatomical 3D Shape Recovery

📅 2025-11-19
🏛️ ShapeMI@MICCAI
📈 Citations: 0
Influential: 0
🤖 AI Summary
Ultrasound enables real-time visualization of soft tissues and neurovascular structures during spinal surgery but suffers from bone-induced acoustic shadowing, limiting complete 3D anatomical depiction of vertebral bodies. Method: We propose a multimodal deep learning framework that reconstructs occluded vertebral anatomy in 3D by fusing a single lateral X-ray image with intraoperative 3D ultrasound data—without requiring preoperative CT registration. The method leverages the global bony structural prior from X-ray and the real-time soft-tissue information from ultrasound as complementary inputs, trained on simulated data for anatomical completion. Contribution/Results: Validation on physical phantoms demonstrates statistically significant improvement in vertebral reconstruction accuracy (p < 0.001). The approach enables more complete and geometrically accurate 3D lumbar visualization directly overlaid onto intraoperative ultrasound, effectively overcoming intrinsic limitations of ultrasound-based spinal imaging and advancing its utility in surgical navigation.

📝 Abstract
Ultrasound offers a radiation-free, cost-effective solution for real-time visualization of spinal landmarks, paraspinal soft tissues and neurovascular structures, making it valuable for intraoperative guidance during spinal procedures. However, ultrasound suffers from inherent limitations in visualizing complete vertebral anatomy, in particular vertebral bodies, due to acoustic shadowing effects caused by bone. In this work, we present a novel multi-modal deep learning method for completing occluded anatomical structures in 3D ultrasound by leveraging complementary information from a single X-ray image. To enable training, we generate paired training data consisting of: (1) 2D lateral vertebral views that simulate X-ray scans, and (2) 3D partial vertebrae representations that mimic the limited visibility and occlusions encountered during ultrasound spine imaging. Our method integrates morphological information from both imaging modalities and demonstrates significant improvements in vertebral reconstruction (p<0.001) compared to the state of the art in 3D ultrasound vertebral completion. We perform phantom studies as an initial step toward future clinical translation, and achieve a more accurate, complete volumetric lumbar spine visualization overlaid on the ultrasound scan without the need for registration with preoperative modalities such as computed tomography. This demonstrates that integrating a single X-ray projection mitigates ultrasound's key limitation while preserving its strengths as the primary imaging modality. Code and data can be found at https://github.com/miruna20/US-X-Complete
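The abstract describes paired training data built from two simulations: a 2D lateral view standing in for an X-ray, and a partial 3D volume mimicking bone-induced acoustic shadowing. A minimal sketch of that idea in NumPy (the function name, axis conventions, and the parallel-beam/first-surface approximations are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

def simulate_training_pair(volume, probe_axis=0, xray_axis=2):
    """Hypothetical paired-sample generator from a binary vertebra volume.

    volume     : 3D binary numpy array (1 = bone voxel)
    probe_axis : axis along which the simulated ultrasound probe insonifies
    xray_axis  : axis along which the simulated lateral X-ray projects

    Returns (xray_2d, partial_3d): a summed parallel-beam projection, and a
    partial volume keeping only the first bone surface seen by the probe.
    """
    # Simulated lateral X-ray: parallel-beam projection (summed "attenuation").
    xray_2d = volume.sum(axis=xray_axis).astype(np.float32)

    # Acoustic shadowing: along the probe direction, everything deeper than
    # the first bone voxel is occluded, so keep only that first surface.
    vol = np.moveaxis(volume, probe_axis, 0)      # put probe axis first
    first_hit = np.argmax(vol, axis=0)            # depth of first bone voxel
    has_bone = vol.any(axis=0)                    # columns containing any bone
    depth = np.arange(vol.shape[0])[:, None, None]
    partial = ((depth == first_hit[None]) & has_bone[None]).astype(volume.dtype)
    partial_3d = np.moveaxis(partial, 0, probe_axis)
    return xray_2d, partial_3d
```

The key property the sketch reproduces is the asymmetry of the two inputs: the projection retains global bony structure (including the vertebral body), while the shadowed volume retains only the probe-facing surface.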
Problem

Research questions and friction points this paper is trying to address.

Bone-induced acoustic shadowing leaves vertebral bodies occluded in 3D ultrasound
Ultrasound alone cannot depict complete vertebral anatomy for surgical navigation
Existing guidance workflows depend on registration to preoperative CT
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal deep learning for 3D ultrasound anatomical completion
Leveraging a single X-ray projection to overcome acoustic shadowing
Generating paired training data from simulated X-ray and ultrasound views
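The summary states that the network fuses a 2D X-ray view with a partial 3D ultrasound representation to predict the completed vertebra, but does not detail the architecture. As a hedged illustration only, a tiny late-fusion sketch in NumPy (the layer sizes, single-layer encoders, and fusion-by-concatenation are all assumptions, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Tiny stand-in encoder: flatten the input, project to a latent code."""
    return np.tanh(x.reshape(-1) @ w)

# Hypothetical sizes: 16x16 X-ray view, 8^3 partial volume, 32-D latents.
w_xray   = rng.normal(size=(16 * 16, 32)) * 0.01
w_us     = rng.normal(size=(8 ** 3, 32)) * 0.01
w_decode = rng.normal(size=(64, 8 ** 3)) * 0.01   # fused 64-D latent -> volume

def complete_vertebra(xray_2d, partial_3d):
    """Late fusion: concatenate the per-modality latent codes, then decode
    to a completed occupancy volume (per-voxel sigmoid in (0, 1))."""
    z = np.concatenate([encode(xray_2d, w_xray), encode(partial_3d, w_us)])
    logits = z @ w_decode
    return (1.0 / (1.0 + np.exp(-logits))).reshape(8, 8, 8)
```

In a real model the decoder would be trained against the complete ground-truth vertebra from the simulated pairs; the sketch only shows how the two modalities can enter a single completion head.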