PIVOTS: Aligning unseen Structures using Preoperative to Intraoperative Volume-To-Surface Registration for Liver Navigation

📅 2025-07-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Preoperative-to-intraoperative registration in augmented reality-guided hepatectomy remains challenging due to large, non-rigid liver deformations induced by pneumoperitoneum, respiration, and surgical instrument interactions. Method: The paper proposes an end-to-end point cloud registration framework featuring a multi-resolution geometric feature extractor and a deformation-aware cross-attention module for hierarchical displacement prediction. It adopts a pure point cloud input architecture and is trained on biomechanically simulated synthetic data to improve robustness against large deformations and noise. Contribution/Results: The method achieves state-of-the-art performance on both synthetic and real-world datasets. Notably, the authors introduce and publicly release the first standardized benchmark dataset for this task, comprising paired liver volume and surface point clouds, along with open-source code, enabling reproducible and standardized evaluation of liver registration methods for AR-guided surgery.

📝 Abstract
Non-rigid registration is essential for augmented reality-guided laparoscopic liver surgery: it fuses preoperative information, such as tumor location and vascular structures, into the limited intraoperative view, thereby enhancing surgical navigation. A prerequisite is accurate prediction of intraoperative liver deformation, which remains highly challenging due to large deformation caused by pneumoperitoneum, respiration, and tool interaction, as well as noisy intraoperative data and a limited field of view caused by occlusion and constrained camera movement. To address these challenges, we introduce PIVOTS, a Preoperative to Intraoperative VOlume-To-Surface registration neural network that directly takes point clouds as input for deformation prediction. The geometric feature extraction encoder performs multi-resolution feature extraction, and the decoder, comprising novel deformation-aware cross-attention modules, enables pre- and intraoperative information interaction and accurate multi-level displacement prediction. We train the network on synthetic data generated by a biomechanical simulation pipeline and validate its performance on both synthetic and real datasets. Results demonstrate superior registration performance of our method compared to baseline methods, with strong robustness against high amounts of noise, large deformation, and varying levels of intraoperative visibility. We publish the training and test sets as evaluation benchmarks and call for fair comparison of liver registration methods on volume-to-surface data. Code and datasets are available at https://github.com/pengliu-nct/PIVOTS.
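The multi-level displacement prediction mentioned in the abstract can be pictured as a coarse-to-fine scheme: displacements estimated on a subsampled point set are propagated to the full-resolution cloud and refined there. The sketch below is an illustrative pure-Python stand-in, not the paper's actual network; the function names (`upsample_displacements`, `compose`) are invented for this example, and nearest-neighbour propagation is only one plausible upsampling choice.

```python
def upsample_displacements(coarse_pts, coarse_disp, fine_pts):
    """Propagate coarse-level displacements to a finer point set via
    nearest-neighbour lookup - a toy stand-in for hierarchical
    (multi-level) displacement prediction."""
    def nearest(p):
        # Index of the coarse point closest to p (squared Euclidean distance).
        return min(range(len(coarse_pts)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(coarse_pts[i], p)))
    return [coarse_disp[nearest(p)] for p in fine_pts]

def compose(upsampled, residual):
    """Total displacement = upsampled coarse displacement + fine-level residual."""
    return [[a + b for a, b in zip(da, db)] for da, db in zip(upsampled, residual)]
```

In a real coarse-to-fine network the fine-level residuals would be predicted by the decoder at each resolution; here they are simply added to show the composition.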
Problem

Research questions and friction points this paper is trying to address.

Aligns preoperative liver data to intraoperative surfaces for navigation
Predicts liver deformation despite noise and limited visibility
Improves registration accuracy in laparoscopic surgery using neural networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses neural network for volume-to-surface registration
Multi-resolution feature extraction with geometric encoder
Deformation-aware cross-attention modules in the decoder
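The cross-attention listed above can be pictured, in highly simplified form, as preoperative volume points (queries) attending over intraoperative surface points (keys/values) to aggregate intraoperative information. The sketch below is a plain single-head dot-product cross-attention in pure Python, assumed for illustration only; it omits the deformation-aware conditioning, learned projections, and everything else specific to PIVOTS.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross_attention(vol_feats, surf_feats, surf_vals):
    """Each preoperative volume point (query) attends over intraoperative
    surface points (keys) and aggregates their values - a toy, single-head
    dot-product version of the cross-attention idea, not the actual module."""
    d = len(surf_feats[0])
    out = []
    for q in vol_feats:
        scores = [dot(q, k) / math.sqrt(d) for k in surf_feats]
        w = softmax(scores)
        # Output is a convex combination of the surface values.
        out.append([sum(wi * v[j] for wi, v in zip(w, surf_vals))
                    for j in range(len(surf_vals[0]))])
    return out
```

Because the attention weights sum to one, each volume point's output lies in the convex hull of the surface values, which makes the aggregation robust to individual noisy surface points.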
Peng Liu · Bianca Güttner · Yutong Su · Chenyang Li · Jinjing Xu
Translational Surgical Oncology, National Center for Tumor Diseases, Fetscherstrasse 74 /PF 64, Dresden, 01307, Saxony, Germany; German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg, 69120, Baden-Württemberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Dresden, 01307, Saxony, Germany; Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden, Saxony, Germany
Mingyang Liu
School of Control Science and Engineering, Shandong University, Jinan, Shandong, China
Zhe Min
Shandong University/University College London
Medical Robotics · Registration · Deep Learning · 3D Vision · Computer-Assisted Surgery
Andrey Zhylka · Jasper Smit · Karin Olthof
Surgical Department, The Netherlands Cancer Institute, Amsterdam, Netherlands
Matteo Fusaglia
The Netherlands Cancer Institute
Biomedical Engineering
Rudi Apolle · Matthias Miederer
Translational Imaging in Oncology, National Center for Tumor Diseases, Fetscherstrasse 74 /PF 64, Dresden, 01307, Saxony, Germany; Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, TUD Dresden University of Technology, Dresden, 01307, Saxony, Germany; German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg, 69120, Baden-Württemberg, Germany
Laura Frohneberger · Carina Riediger
Faculty of Medicine and University Hospital Carl Gustav Carus, Dresden, 01307, Saxony, Germany
Jürgen Weitz
University Hospital TU Dresden
Fiona Kolbinger
Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, TUD Dresden University of Technology, Dresden, 01307, Saxony, Germany; Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
Stefanie Speidel
Professor, National Center for Tumor Diseases (NCT) Dresden
Computer- and robotic-assisted surgery · Surgical data science
Micha Pfeiffer
Researcher, NCT Dresden
Surgical Assistance · Machine Learning