🤖 AI Summary
Freehand ultrasound is strongly view-dependent and prone to acoustic shadowing, which impedes accurate 3D anatomical reconstruction; existing implicit-representation approaches also rely heavily on precise manual annotations. Method: UltrON is an occupancy-based implicit neural representation framework that formulates acoustic features extracted from B-mode images, obtained without additional annotation cost, as constraints for weakly supervised geometry optimization. A novel multi-view consistency loss compensates for the view dependency of B-mode imaging and enables occupancy optimization from multiview ultrasound. Contribution/Results: Experiments show that UltrON mitigates the limitations of occlusion and sparse labeling, improves 3D reconstruction accuracy, and generalizes to shapes of the same anatomy.
📝 Abstract
In free-hand ultrasound imaging, sonographers rely on expertise to mentally integrate partial 2D views into 3D anatomical shapes. Shape reconstruction can assist clinicians in this process. Central to this task is the choice of shape representation, as it determines how accurately and efficiently the structure can be visualized, analyzed, and interpreted. Implicit representations, such as the signed distance function (SDF) and the occupancy function, offer a powerful alternative to traditional voxel- or mesh-based methods by modeling continuous, smooth surfaces with compact storage, avoiding explicit discretization. Recent studies demonstrate that SDFs can be effectively optimized using annotations derived from segmented B-mode ultrasound images. Yet these approaches hinge on precise annotations, overlooking the rich acoustic information embedded in B-mode intensity. Moreover, implicit representation approaches struggle with ultrasound's view-dependent nature and acoustic shadowing artifacts, which impair reconstruction. To address the problems arising from occlusions and annotation dependency, we propose an occupancy-based representation and introduce UltrON, which leverages acoustic features to improve geometric consistency in a weakly supervised optimization regime. We show that these features can be obtained from B-mode images without additional annotation cost. Moreover, we propose a novel loss function that compensates for view dependency in the B-mode images and facilitates occupancy optimization from multiview ultrasound. By incorporating acoustic properties, UltrON generalizes to shapes of the same anatomy. We show that UltrON mitigates the limitations of occlusions and sparse labeling and paves the way for more accurate 3D reconstruction. Code and dataset will be available at https://github.com/magdalena-wysocki/ultron.
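To make the representational choices concrete, here is a minimal sketch of the two implicit representations the abstract contrasts, and of a toy multi-view consistency penalty. This is purely illustrative: the analytic sphere SDF, the sigmoid conversion with sharpness `beta`, and the variance-based penalty are all assumptions for the example, not UltrON's actual networks or loss.

```python
import numpy as np

def sdf_sphere(points, radius=1.0):
    # Signed distance to a sphere centered at the origin:
    # negative inside the surface, positive outside.
    return np.linalg.norm(points, axis=-1) - radius

def occupancy(points, radius=1.0, beta=10.0):
    # Soft occupancy in [0, 1] derived from the SDF via a sigmoid.
    # beta (an illustrative parameter) controls surface sharpness.
    return 1.0 / (1.0 + np.exp(beta * sdf_sphere(points, radius)))

def multiview_consistency(occ_per_view):
    # Toy consistency penalty (not the paper's loss): variance of
    # per-view occupancy estimates at shared 3D query points.
    return float(np.mean(np.var(occ_per_view, axis=0)))

pts = np.array([[0.0, 0.0, 0.0],   # inside the unit sphere
                [2.0, 0.0, 0.0]])  # outside the unit sphere
occ = occupancy(pts)               # ≈ [1, 0]
penalty = multiview_consistency(np.stack([occ, occ]))  # identical views → 0.0
```

The sketch shows why occupancy is convenient for weak supervision: it is a bounded probability-like quantity per query point, so view-dependent predictions at the same 3D location can be compared directly, whereas an SDF encodes metric distance to the surface.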