🤖 AI Summary
Intraoperative pelvic X-ray imaging is highly susceptible to viewpoint variations caused by C-arm positioning and patient posture, degrading the accuracy of anatomical landmark detection. To address this, we propose an end-to-end U-Net framework that jointly performs 2D/3D registration and pose estimation. Our key innovation lies in incorporating both the reprojection error of 3D landmarks and a pose estimation loss into the training objective, explicitly modeling viewpoint deviations and enhancing robustness to non-standard anteroposterior (AP) views. Evaluated on a real-world intraoperative multi-pose dataset, our method significantly outperforms the baseline U-Net and an ablated variant that uses pose loss alone, achieving a 21.3% reduction in mean localization error for critical anatomical landmarks. This improvement provides more reliable and adaptive anatomical localization support for fluoroscopy-guided orthopedic surgery.
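The combined training objective described above can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the pinhole projection model, the loss weights `w_reproj` and `w_pose`, and the use of a geodesic rotation distance for the pose term are all assumptions made for the sketch.

```python
import numpy as np

def project(points_3d, K, R, t):
    # Pinhole projection of Nx3 world points into 2D pixel coordinates
    # using intrinsics K, rotation R, and translation t.
    cam = points_3d @ R.T + t        # transform into the camera frame
    uv = cam @ K.T                   # homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]    # perspective divide

def combined_loss(pred_2d, gt_2d, landmarks_3d, K,
                  R_pred, t_pred, R_gt, t_gt,
                  w_reproj=1.0, w_pose=0.1):
    # NOTE: w_reproj and w_pose are hypothetical weights, not from the paper.
    # 2D landmark term: mean squared error between predicted and GT landmarks.
    landmark_loss = np.mean(np.sum((pred_2d - gt_2d) ** 2, axis=1))

    # Reprojection term: project known 3D landmarks with the *predicted*
    # pose and penalize their distance to the GT 2D landmarks.
    reproj = project(landmarks_3d, K, R_pred, t_pred)
    reproj_loss = np.mean(np.sum((reproj - gt_2d) ** 2, axis=1))

    # Pose term: geodesic angle between predicted and GT rotations,
    # plus the translation error (one common choice of pose loss).
    cos = (np.trace(R_gt.T @ R_pred) - 1.0) / 2.0
    rot_err = np.arccos(np.clip(cos, -1.0, 1.0))
    pose_loss = rot_err + np.linalg.norm(t_pred - t_gt)

    return landmark_loss + w_reproj * reproj_loss + w_pose * pose_loss
```

With a perfect prediction (2D landmarks and pose both matching ground truth) every term vanishes, which is a quick sanity check that the reprojection term is consistent with the projection model.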
📝 Abstract
Automated landmark detection offers an efficient way for medical professionals to assess patient anatomical structure and positioning from intraoperative imaging. While current detection methods for pelvic fluoroscopy demonstrate promising accuracy, most assume a fixed anteroposterior (AP) view of the pelvis. In practice, however, the orientation often deviates from this standard view, whether due to repositioning of the imaging unit or of the target anatomy itself. To address this limitation, we propose a novel framework that incorporates 2D/3D landmark registration into the training of a U-Net landmark prediction model. We analyze the resulting performance difference by comparing landmark detection accuracy across the baseline U-Net, a U-Net trained with pose estimation loss, and a U-Net fine-tuned with pose estimation loss, under realistic intraoperative conditions where patient pose is variable.