🤖 AI Summary
To address challenges in spinal CT-to-biplanar X-ray registration—including spatial information loss, large domain gaps, poor noise robustness, and reliance on dense view acquisitions—this paper proposes RadGS-Reg, a novel end-to-end registration framework. It pioneers the integration of learning-based Radiative Gaussians (RadGS) for 3D reconstruction with a Counterfactual Attention Learning (CAL) mechanism to jointly optimize geometry and appearance alignment. A patient-specific pre-training strategy enables progressive domain adaptation from synthetic to real clinical data, while vertebral shape priors enforce anatomical consistency. Evaluated on a newly established clinical dataset, RadGS-Reg achieves state-of-the-art performance in both 3D reconstruction and registration accuracy, significantly improving robustness to image noise and computational efficiency (<100 ms per case). The implementation is publicly available.
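The "render and compare" baseline criticized above repeatedly projects the CT volume at a candidate pose and compares the synthetic radiograph against the acquired X-ray. The toy sketch below illustrates that loop under heavy simplifying assumptions: a digitally-reconstructed radiograph (DRR) is approximated by summing a shifted volume along depth, the pose is reduced to a 2D in-plane translation, and the "optimizer" is an exhaustive grid search minimizing squared intensity difference. None of this code comes from the paper; it only makes the iterative projection-and-comparison structure concrete.

```python
import numpy as np

def drr(volume, shift):
    # Crude DRR: translate the volume in-plane, then integrate along depth.
    v = np.roll(volume, shift, axis=(0, 1))
    return v.sum(axis=2)

rng = np.random.default_rng(1)
vol = rng.random((16, 16, 8))        # stand-in for a CT volume
target = drr(vol, (3, -2))           # "acquired" X-ray at an unknown pose

# Render-and-compare loop: project at each candidate pose, score, keep best.
best, best_err = (0, 0), np.inf
for dy in range(-4, 5):
    for dx in range(-4, 5):
        err = np.sum((drr(vol, (dy, dx)) - target) ** 2)
        if err < best_err:
            best, best_err = (dy, dx), err
# best recovers the true shift (3, -2) with zero residual
```

Because every candidate pose requires a full re-projection, and the similarity metric sees only 2D integrals of the 3D anatomy, this family of methods is slow and loses spatial information — precisely the limitations the summary says RadGS-Reg sidesteps by reconstructing in 3D and registering 3D/3D.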
📝 Abstract
Computed Tomography (CT)/X-ray registration in image-guided navigation remains challenging because of its stringent requirements for high accuracy and real-time performance. Traditional "render and compare" methods, which rely on iterative projection and comparison, suffer from spatial information loss and domain gap. 3D reconstruction from biplanar X-rays supplements spatial and shape information for 2D/3D registration, but current methods are limited by dense-view requirements and struggle with noisy X-rays. To address these limitations, we introduce RadGS-Reg, a novel framework for vertebral-level CT/X-ray registration through joint 3D Radiative Gaussians (RadGS) reconstruction and 3D/3D registration. Specifically, our biplanar X-ray vertebral RadGS reconstruction module explores a learning-based RadGS reconstruction method with a Counterfactual Attention Learning (CAL) mechanism that focuses on vertebral regions in noisy X-rays. Additionally, a patient-specific pre-training strategy progressively adapts RadGS-Reg from simulated to real data while simultaneously learning vertebral shape prior knowledge. Experiments on in-house datasets demonstrate state-of-the-art performance on both tasks, surpassing existing methods. The code is available at: https://github.com/shenao1995/RadGS_Reg.
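Counterfactual Attention Learning, as introduced in the broader literature (Rao et al., 2021), trains an attention map by contrasting the prediction made with the learned attention against a counterfactual prediction made with random attention; maximizing that gap pushes the network to attend to causally useful regions (here, vertebrae rather than noise). The sketch below is a generic, illustrative NumPy version of that factual-vs-counterfactual contrast, not the paper's CAL module: `attend` (attention-weighted feature pooling) and the feature/attention shapes are hypothetical.

```python
import numpy as np

def attend(features, attn):
    # Attention-weighted pooling: collapse an HxWxC feature map to a
    # C-dim descriptor using a normalized HxW attention map.
    w = attn / attn.sum()
    return (features * w[..., None]).sum(axis=(0, 1))

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 8, 4))          # toy per-pixel features

learned_attn = np.zeros((8, 8))
learned_attn[2:5, 2:5] = 1.0                # attention focused on a "vertebra" patch

factual = attend(feats, learned_attn)       # prediction with learned attention
counterfactual = attend(feats, rng.uniform(size=(8, 8)))  # random-attention intervention

# The "causal effect" of attention; a CAL-style loss would reward making
# this gap large, so attention that ignores noise is explicitly favored.
effect = factual - counterfactual
```

In a real training loop the descriptors would feed a classifier and the loss would combine the factual prediction's task loss with a term on the factual/counterfactual gap; this sketch only shows where the counterfactual intervention enters.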