🤖 AI Summary
This work addresses the challenge of incomplete three-dimensional vertebral reconstruction in ultrasound imaging caused by acoustic occlusion and view dependency. The authors propose a label-free neural implicit shape completion method that couples the latent spaces of image appearance and anatomical shape, jointly modeling acoustic propagation characteristics and geometric occupancy fields. This approach recovers complete anatomical surfaces directly from partial ultrasound observations, without requiring explicit occlusion labels or anatomical annotations at inference time. Experimental results show that the method reduces the HD95 error by 80% on B-mode ultrasound images and achieves high-fidelity reconstruction of occluded structures, generalizing robustly across imaging conditions in both simulated and physical phantoms.
📝 Abstract
Accurate 3D reconstruction of vertebral anatomy from ultrasound is important for guiding minimally invasive spine interventions, but it remains challenging due to acoustic shadowing and view-dependent signal variations. We propose an occupancy-based shape completion method that reconstructs complete 3D anatomical geometry from partial ultrasound observations. Crucially for intra-operative applications, our approach extracts the anatomical surface directly from the image, avoiding the need for anatomical labels during inference. This label-free completion relies on a coupled latent space representing both the image appearance and the underlying anatomical shape. By leveraging a Neural Implicit Representation (NIR) that jointly models spatial occupancy and acoustic interactions, the method tracks acoustic signal transmission and uses the resulting acoustic parameters to become implicitly aware of unseen regions, without explicit shadowing labels. We show that this method outperforms state-of-the-art shape completion for B-mode ultrasound, reducing the HD95 score by 80%. We validate our approach both in silico and on phantom US images with registered mesh models derived from CT labels, demonstrating accurate reconstruction of occluded anatomy and robust generalization across diverse imaging conditions. Code and data will be released upon publication.
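To make the core idea concrete, the sketch below shows how a neural implicit field with a shared (coupled) latent code can be queried at 3-D points to produce both an occupancy value and an acoustic parameter from the same representation. This is an illustrative toy with randomly initialized weights, not the authors' implementation; all names, dimensions, and the two-head MLP design are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 3-D query point,
# 8-D coupled latent code shared by the shape and appearance decoders.
D_LATENT, D_HID = 8, 32

# Random weights stand in for a trained network.
W1 = rng.normal(size=(3 + D_LATENT, D_HID))
W_occ = rng.normal(size=(D_HID, 1))   # occupancy head
W_ac = rng.normal(size=(D_HID, 1))    # acoustic-parameter head

def query_field(points, z):
    """Decode a coupled latent code z at N query points into
    (occupancy probability, acoustic parameter) pairs."""
    z_tiled = np.broadcast_to(z, (len(points), D_LATENT))
    h = np.tanh(np.concatenate([points, z_tiled], axis=1) @ W1)
    occ = 1.0 / (1.0 + np.exp(-(h @ W_occ)))  # sigmoid -> occupancy in [0, 1]
    ac = h @ W_ac                             # unconstrained acoustic value
    return occ.ravel(), ac.ravel()

pts = rng.normal(size=(5, 3))       # five 3-D query locations
z = rng.normal(size=D_LATENT)       # one latent code for the whole shape
occ, ac = query_field(pts, z)
```

Because both heads read the same latent code, constraints learned on the acoustic side (e.g. from tracking signal transmission through shadowed regions) can inform the occupancy prediction in unseen areas, which is the intuition behind the coupling described above.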