NeuralBoneReg: A Novel Self-Supervised Method for Robust and Accurate Multi-Modal Bone Surface Registration

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
In computer-assisted orthopedic surgery, registration between preoperative medical images and intraoperative point clouds is challenging due to modality heterogeneity. To address this, we propose NeuralBoneReg, a self-supervised framework that combines implicit neural unsigned distance fields (UDFs) with modality-agnostic bone surface registration, without requiring cross-subject annotations. Our method employs 3D point clouds as a unified representation: a neural UDF implicitly models the preoperative bone geometry, and an MLP-based module generates transformation hypotheses, enabling global initialization followed by local refinement. NeuralBoneReg achieves robust cross-modal and cross-anatomical registration, substantially enhancing clinical applicability. Evaluated on the UltraBones100k, UltraBones-Hip, and SpineDepth datasets, it achieves mean rotation/translation errors of 1.68°/1.86 mm, 1.88°/1.89 mm, and 3.79°/2.45 mm, respectively—outperforming state-of-the-art methods.

📝 Abstract
In computer- and robot-assisted orthopedic surgery (CAOS), patient-specific surgical plans derived from preoperative imaging define target locations and implant trajectories. During surgery, these plans must be accurately transferred, relying on precise cross-registration between preoperative and intraoperative data. However, substantial modality heterogeneity across imaging modalities makes this registration challenging and error-prone. Robust, automatic, and modality-agnostic bone surface registration is therefore clinically important. We propose NeuralBoneReg, a self-supervised, surface-based framework that registers bone surfaces using 3D point clouds as a modality-agnostic representation. NeuralBoneReg includes two modules: an implicit neural unsigned distance field (UDF) that learns the preoperative bone model, and an MLP-based registration module that performs global initialization and local refinement by generating transformation hypotheses to align the intraoperative point cloud with the neural UDF. Unlike SOTA supervised methods, NeuralBoneReg operates in a self-supervised manner, without requiring inter-subject training data. We evaluated NeuralBoneReg against baseline methods on two publicly available multi-modal datasets: a CT-ultrasound dataset of the fibula and tibia (UltraBones100k) and a CT-RGB-D dataset of spinal vertebrae (SpineDepth). The evaluation also includes a newly introduced CT-ultrasound dataset of cadaveric subjects containing femur and pelvis (UltraBones-Hip), which will be made publicly available. NeuralBoneReg matches or surpasses existing methods across all datasets, achieving mean RRE/RTE of 1.68°/1.86 mm on UltraBones100k, 1.88°/1.89 mm on UltraBones-Hip, and 3.79°/2.45 mm on SpineDepth. These results demonstrate strong generalizability across anatomies and modalities, providing robust and accurate cross-modal alignment for CAOS.
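The reported RRE/RTE figures (rotation error in degrees, translation error in mm) follow the standard definitions for rigid registration. A minimal sketch of how such metrics are typically computed; the function names are illustrative, not from the paper's code:

```python
import numpy as np

def rre_deg(R_est: np.ndarray, R_gt: np.ndarray) -> float:
    """Relative rotation error in degrees between two 3x3 rotation matrices.

    Uses the geodesic angle: cos(theta) = (trace(R_est^T R_gt) - 1) / 2.
    """
    cos_angle = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    # Clip to guard against numerical drift slightly outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

def rte(t_est, t_gt) -> float:
    """Relative translation error: Euclidean distance between translation
    vectors, in whatever units the point clouds use (here mm)."""
    return float(np.linalg.norm(np.asarray(t_est, float) - np.asarray(t_gt, float)))
```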
Problem

Research questions and friction points this paper is trying to address.

Addresses modality heterogeneity in bone surface registration for orthopedic surgery
Enables automatic cross-modal alignment between preoperative and intraoperative data
Provides robust registration without requiring inter-subject training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised framework using modality-agnostic point clouds
Implicit neural unsigned distance field for bone modeling
MLP-based registration with global initialization and local refinement
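The hypothesize-and-score idea behind global initialization can be sketched as follows. Here a simple analytic UDF (unsigned distance to a unit sphere) stands in for the learned neural UDF, and candidate rigid transforms are scored by the mean unsigned distance of the transformed intraoperative points; the names and the exact scoring rule are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def score_hypothesis(points: np.ndarray, R: np.ndarray, t: np.ndarray, udf) -> float:
    """Mean unsigned distance of the transformed intraoperative points under
    the preoperative UDF. Lower is better: points lying on the bone surface
    score ~0."""
    transformed = points @ R.T + t
    return float(np.mean(udf(transformed)))

def best_hypothesis(points, hypotheses, udf):
    """Global initialization: keep the (R, t) hypothesis with the lowest score.
    Local refinement would then minimize the same objective from this start."""
    return min(hypotheses, key=lambda h: score_hypothesis(points, h[0], h[1], udf))

# Stand-in UDF: unsigned distance to the unit sphere (not a learned network).
sphere_udf = lambda p: np.abs(np.linalg.norm(p, axis=1) - 1.0)
```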
🔎 Similar Papers
2024-08-01 · Workshop on Biomedical Image Registration · Citations: 2
👥 Authors

Luohong Wu
Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Lengghalde 5, 8008 Zurich, Switzerland

Matthias Seibold
Research in Orthopedic Computer Science, Balgrist University Hospital, Zurich, Switzerland
Computer Assisted Surgery · Acoustic Sensing · Computer Vision · Medical Augmented Reality

Nicola A. Cavalcanti
Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Lengghalde 5, 8008 Zurich, Switzerland

Yunke Ao
Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Lengghalde 5, 8008 Zurich, Switzerland; AI Center, ETH Zurich, Ramistrasse 101, 8092 Zurich, Switzerland

Roman Flepp
Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, Lengghalde 5, 8008 Zurich, Switzerland

Aidana Massalimova
University Hospital Balgrist, University of Zurich

Lilian Calvet
Postdoc in Computer Vision
computer vision · machine learning · augmented reality · medical imaging · computer-assisted interventions

Philipp Fürnstahl
Prof. Dr., Universität Zürich