🤖 AI Summary
Preoperative-to-intraoperative registration in augmented reality–guided hepatectomy remains challenging due to large, non-rigid liver deformations induced by pneumoperitoneum, respiration, and surgical instrument interactions.
Method: This paper proposes an end-to-end point cloud registration framework featuring a multi-resolution geometric feature extractor and a deformation-aware cross-attention module for hierarchical displacement prediction. The network operates directly on point clouds and is trained on biomechanically simulated synthetic data to improve robustness to large deformations and noise.
Contribution/Results: The method achieves state-of-the-art performance on both synthetic and real-world datasets. Notably, the authors introduce and publicly release the first standardized benchmark dataset for this task—paired liver volume and surface point clouds—along with open-source code, enabling reproducible, standardized evaluation of liver registration in AR-guided surgery.
📝 Abstract
Non-rigid registration is essential for augmented reality (AR)–guided laparoscopic liver surgery: it fuses preoperative information, such as tumor location and vascular structures, into the limited intraoperative view, thereby enhancing surgical navigation. A prerequisite is accurate prediction of intraoperative liver deformation, which remains highly challenging due to large deformation caused by pneumoperitoneum, respiration, and tool interaction; noisy intraoperative data; and a limited field of view due to occlusion and constrained camera movement. To address these challenges, we introduce PIVOTS, a Preoperative to Intraoperative VOlume-To-Surface registration neural network that directly takes point clouds as input for deformation prediction. The geometric feature extraction encoder performs multi-resolution feature extraction, and the decoder, comprising novel deformation-aware cross-attention modules, enables interaction between pre- and intraoperative information and accurate multi-level displacement prediction. We train the network on synthetic data generated by a biomechanical simulation pipeline and validate its performance on both synthetic and real datasets. Results demonstrate superior registration performance compared to baseline methods, with strong robustness against high levels of noise, large deformation, and varying degrees of intraoperative visibility. We publish the training and test sets as evaluation benchmarks and call for fair comparison of liver registration methods on volume-to-surface data. Code and datasets are available at https://github.com/pengliu-nct/PIVOTS.
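To make the core idea concrete, the sketch below illustrates how cross-attention can couple preoperative and intraoperative point features to predict a per-point displacement field. This is a minimal single-head NumPy toy, not the PIVOTS architecture itself: the function and weight names (`cross_attention_displacement`, `Wq`, `Wk`, `Wv`, `Wo`) and the feature dimensions are hypothetical; the actual multi-resolution, deformation-aware design is in the linked repository.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_displacement(vol_feats, surf_feats, Wq, Wk, Wv, Wo):
    """Toy deformation-prediction step: preoperative volume point features
    (queries) attend to intraoperative surface features (keys/values), and
    the attended features are projected to one 3D displacement per point.
    All weight matrices are illustrative stand-ins for learned parameters."""
    q = vol_feats @ Wq                                        # (N_vol, d)
    k = surf_feats @ Wk                                       # (N_surf, d)
    v = surf_feats @ Wv                                       # (N_surf, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)   # (N_vol, N_surf)
    return (attn @ v) @ Wo                                    # (N_vol, 3)

rng = np.random.default_rng(0)
d_in, d = 16, 32
vol_feats = rng.normal(size=(100, d_in))   # features of preoperative volume points
surf_feats = rng.normal(size=(40, d_in))   # features of intraoperative surface points
Wq, Wk, Wv = (rng.normal(size=(d_in, d)) for _ in range(3))
Wo = rng.normal(size=(d, 3))

disp = cross_attention_displacement(vol_feats, surf_feats, Wq, Wk, Wv, Wo)
print(disp.shape)  # (100, 3): one predicted displacement vector per volume point
```

In a trained network the projection matrices are learned, attention is typically multi-headed, and the prediction is repeated hierarchically across resolutions so coarse displacements are refined by finer levels.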