🤖 AI Summary
To address the limited accuracy and efficiency of implicit 3D CT reconstruction from ultra-sparse views—caused by neglecting anatomical priors—this paper proposes a target-prior-guided implicit reconstruction framework. Building upon Neural Radiance Fields (NeRF), our method integrates anatomical structural priors in the projection domain and employs joint position-structure encoding for voxel-level implicit modeling. A CUDA-accelerated prior estimation module is further designed to enhance computational efficiency. Our key innovation lies in explicitly embedding learnable anatomical priors into the implicit neural representation pipeline, thereby ensuring both physical consistency (via projection-domain constraints) and geometric plausibility (via anatomy-aware encoding). Experiments on abdominal CT data demonstrate that our method achieves a tenfold improvement in training efficiency over NAF. Compared to NeRP, it attains PSNR gains of 3.57 dB (10 views), 5.42 dB (20 views), and 5.70 dB (30 views).
📝 Abstract
X-ray imaging, based on penetration, enables detailed visualization of internal structures. Building on this capability, existing implicit 3D reconstruction methods have adapted the NeRF model and its variants for internal CT reconstruction. However, these approaches often neglect the significance of an object's anatomical priors for implicit learning, limiting both reconstruction precision and learning efficiency, particularly in ultra-sparse-view scenarios. To address these challenges, we propose a novel 3D CT reconstruction framework that employs a 'target prior' derived from the object's projection data to enhance implicit learning. Our approach integrates positional and structural encoding to facilitate voxel-wise implicit reconstruction, utilizing the target prior to guide voxel sampling and enrich structural encoding. This dual strategy significantly boosts both learning efficiency and reconstruction quality. Additionally, we introduce a CUDA-based algorithm for rapid estimation of high-quality 3D target priors from sparse-view projections. Experiments on projection data from a complex abdominal dataset demonstrate that the proposed model substantially enhances learning efficiency, outperforming the current leading model, NAF, by a factor of ten. In terms of reconstruction quality, it also exceeds the most accurate model, NeRP, achieving PSNR improvements of 3.57 dB, 5.42 dB, and 5.70 dB with 10, 20, and 30 projections, respectively. The code is available at https://github.com/qlcao171/TPG-INR.
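The joint position-structure encoding described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the frequency-encoding depth, the nearest-neighbour sampling of the prior volume, and all function names are assumptions made for the example.

```python
import numpy as np

def positional_encoding(coords, num_freqs=6):
    """NeRF-style frequency encoding of normalized 3D coordinates in [-1, 1].

    Returns an (N, 3 * 2 * num_freqs) feature matrix (sin and cos per
    frequency per axis); num_freqs=6 is an illustrative choice.
    """
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi      # (F,)
    angles = coords[..., None] * freqs                 # (N, 3, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(coords.shape[0], -1)

def structural_encoding(coords, prior_volume):
    """Sample a precomputed 3D target-prior volume at each voxel coordinate.

    Nearest-neighbour lookup is used for brevity; the paper's actual
    interpolation scheme is not specified here.
    """
    shape = np.array(prior_volume.shape)
    # Map [-1, 1] coordinates to voxel indices.
    idx = np.round((coords + 1.0) / 2.0 * (shape - 1)).astype(int)
    idx = np.clip(idx, 0, shape - 1)
    return prior_volume[idx[:, 0], idx[:, 1], idx[:, 2]][:, None]  # (N, 1)

def joint_encoding(coords, prior_volume, num_freqs=6):
    """Concatenate positional and prior-derived structural features per voxel,
    forming the input to a voxel-wise implicit network (MLP)."""
    return np.concatenate(
        [positional_encoding(coords, num_freqs),
         structural_encoding(coords, prior_volume)],
        axis=-1,
    )
```

In this sketch, the concatenated feature vector would be fed to an MLP that regresses the attenuation value at each sampled voxel, with the prior volume additionally usable to bias where voxels are sampled.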