🤖 AI Summary
Reconstructing multi-view consistent 3D wound structures from 2D images for accurate segmentation remains a significant challenge. This work proposes a novel self-supervised framework that, for the first time, integrates Neural Radiance Fields (NeRF) with Signed Distance Functions (SDF) to eliminate the need for manual annotations. By leveraging NeRF-SDF modeling, the method generates geometrically consistent 3D representations of wounds and automatically derives high-quality segmentation labels, thereby enhancing multi-view consistency. Experimental results demonstrate that the proposed approach substantially outperforms both Vision Transformer-based models and conventional rasterization methods in segmentation accuracy, validating the effectiveness and innovation of the NeRF-SDF paradigm for medical image segmentation.
📝 Abstract
Wound care imposes substantial economic and logistical burdens on patients and hospitals worldwide. In recent decades, healthcare professionals have turned to computer vision and machine learning algorithms for support. Wound segmentation in particular has gained interest for its ability to provide professionals with fast, automatic tissue assessment from standard RGB images. Some approaches have extended segmentation to 3D, enabling more complete and precise tracking of healing progress. However, inferring multi-view-consistent 3D structures from 2D images remains a challenge. In this paper, we evaluate WoundNeRF, a NeRF-SDF-based method for estimating robust wound segmentations from automatically generated annotations. We demonstrate the potential of this paradigm to recover accurate segmentations by comparing it against state-of-the-art Vision Transformer networks and conventional rasterisation-based algorithms. The code will be released to facilitate further development of this promising paradigm.
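To make the NeRF-SDF coupling concrete, the following minimal sketch shows one common way such methods tie geometry to rendering: a signed distance function is mapped to a volume density (here via a VolSDF-style Laplace CDF) and alpha-composited along a ray, so the rendered opacity directly yields a silhouette-style segmentation label. The sphere SDF and the `alpha`/`beta` parameters are illustrative assumptions, not WoundNeRF's actual network or settings.

```python
import numpy as np

def sdf_sphere(points, radius=1.0):
    """Toy SDF standing in for a learned wound surface: negative inside the sphere."""
    return np.linalg.norm(points, axis=-1) - radius

def sdf_to_density(sdf, alpha=10.0, beta=0.1):
    """VolSDF-style mapping sigma = alpha * Psi_beta(-sdf), where Psi_beta is the
    CDF of a zero-mean Laplace distribution with scale beta (illustrative values)."""
    s = -sdf
    return alpha * np.where(s <= 0.0,
                            0.5 * np.exp(s / beta),
                            1.0 - 0.5 * np.exp(-s / beta))

def render_opacity(densities, deltas):
    """Standard volume rendering along one ray: accumulated opacity acts as a
    silhouette value, i.e. an automatically derived segmentation label."""
    alphas = 1.0 - np.exp(-densities * deltas)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * transmittance
    return weights.sum()

# March one ray through the toy surface: high opacity marks a foreground pixel.
t = np.linspace(0.0, 4.0, 128)
origin = np.array([0.0, 0.0, -2.0])
direction = np.array([0.0, 0.0, 1.0])
samples = origin + t[:, None] * direction
deltas = np.full_like(t, t[1] - t[0])
opacity = render_opacity(sdf_to_density(sdf_sphere(samples)), deltas)
```

Because the density is derived from a signed distance rather than predicted freely, the recovered surface (the SDF zero-level set) is shared across all viewpoints, which is what gives the derived labels their multi-view consistency.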