🤖 AI Summary
In AR-guided surgery, dynamic organ deformation causes misalignment between preoperative models and intraoperative anatomy; existing finite element method (FEM)-based approaches suffer from high computational cost and poor generalizability to large deformations induced by pneumoperitoneum or ligament dissection. This paper proposes a human-in-the-loop, prompt-driven deformation correction framework that synergistically integrates data-driven biomechanical modeling with surgeon-provided intraoperative semantic prompts. We design a finite-element-inspired neural network that preserves physical fidelity while accelerating computation by ~60%. Crucially, we introduce interactive prompt encoding into the biomechanical modeling closed loop and jointly optimize AR-space registration. In phantom experiments on liver tissue, target registration error is reduced to 2.78 mm—a 18.7% improvement over baseline methods. In vivo validation demonstrates significantly enhanced surface conformity, outperforming state-of-the-art approaches.
📝 Abstract
In augmented reality (AR)-guided surgical navigation, preoperative organ models are superimposed onto the patient's intraoperative anatomy to visualize critical structures such as vessels and tumors. Accurate deformation modeling is essential to maintain the reliability of AR overlays by ensuring alignment between preoperative models and the dynamically changing anatomy. While finite element (FE) methods offer physically plausible modeling, their high computational cost limits intraoperative applicability. Moreover, large anatomical changes-such as those induced by pneumoperitoneum or ligament dissection- often exceed the capabilities of existing algorithms, leading to inaccurate correspondences and compromised AR guidance. To address these challenges, we propose a data-driven biomechanics algorithm that preserves FE-level accuracy while reducing computational time by approximately 60%. In addition, we introduce an interactive human-in-the-loop framework that enables surgeons to provide immediate corrective prompts to refine deformation predictions, allowing the model to incorporate human expertise and adapt to complex surgical scenarios. Experiments on a public liver phantom dataset demonstrate that our algorithm achieves a mean target registration error of 3.42 mm. Incorporating surgeon prompts through the interactive framework further reduces the error to 2.78 mm, surpassing state-of-the-art methods in volumetric accuracy. In-vivo studies similarly show measurable improvements in surface alignment after incorporating surgeon prompts. These results highlight the ability of our framework to deliver accurate and efficient deformation modeling while enhancing surgeon-algorithm collaboration, paving the way for safer and more reliable computer-assisted surgeries.