🤖 AI Summary
To address the severe degradation in reconstruction quality and the failure of conventional interpolation methods for multi-source stationary CT under ultra-sparse-view (extremely low sampling rate) conditions, this paper proposes Diff-NAF: a novel framework that integrates neural attenuation fields (NAFs) with a dual-branch conditional diffusion model to establish an iterative sparse-projection completion paradigm. Its core components are an angular-prior-guided projection synthesis strategy and a diffusion-driven projection reuse-and-refinement module, augmented by a pseudo-label-based iterative training scheme. Evaluated on multiple 3D synthetic and real-world datasets, Diff-NAF consistently outperforms state-of-the-art methods, significantly improving reconstructed image quality, structural fidelity, and quantitative metrics (e.g., PSNR, SSIM) under ultra-sparse sampling. This work establishes a new paradigm for low-dose stationary CT imaging.
📝 Abstract
Multi-source stationary computed tomography (CT) has recently attracted attention for its ability to achieve rapid image reconstruction, making it suitable for time-sensitive clinical and industrial applications. However, practical systems are often constrained by ultra-sparse-view sampling, which significantly degrades reconstruction quality. Traditional methods struggle under ultra-sparse-view settings, where interpolation becomes inaccurate and the resulting reconstructions are unsatisfactory. To address this challenge, this study proposes Diffusion-Refined Neural Attenuation Fields (Diff-NAF), an iterative framework tailored for multi-source stationary CT under ultra-sparse-view conditions. Diff-NAF combines a Neural Attenuation Field representation with a dual-branch conditional diffusion model. The process begins by training an initial NAF using ultra-sparse-view projections. New projections are then generated through an Angle-Prior Guided Projection Synthesis strategy that exploits inter-view priors, and are subsequently refined by a Diffusion-driven Reuse Projection Refinement Module. The refined projections are incorporated as pseudo-labels into the training set for the next iteration. Through iterative refinement, Diff-NAF progressively enhances projection completeness and reconstruction fidelity under ultra-sparse-view conditions, ultimately yielding high-quality CT reconstructions. Experimental results on multiple simulated 3D CT volumes and real projection data demonstrate that Diff-NAF achieves the best performance under ultra-sparse-view conditions.
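The iterative loop described in the abstract (fit a NAF on the current projections, synthesize projections at unmeasured angles, refine them, then fold the refined projections back in as pseudo-labels) can be sketched schematically. This is a minimal toy illustration, not the authors' implementation: the "NAF" here is a low-order trigonometric fit of projection intensity versus view angle, the diffusion refinement is a placeholder identity, and all function names (`train_naf`, `render_projection`, `diffusion_refine`, `diff_naf_loop`) are hypothetical.

```python
import numpy as np

def train_naf(projections, angles):
    """Stand-in for NAF training: fit a low-order cosine/sine series to
    projection intensity as a function of view angle (not a real NAF)."""
    A = np.stack([np.ones_like(angles), np.cos(angles), np.sin(angles)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, projections, rcond=None)
    return coeffs

def render_projection(coeffs, angle):
    """Render a synthetic projection at a novel angle -- the stand-in for
    Angle-Prior Guided Projection Synthesis."""
    return coeffs @ np.array([1.0, np.cos(angle), np.sin(angle)])

def diffusion_refine(projection):
    """Placeholder for the diffusion-driven refinement module (identity here;
    the paper's module would denoise/refine the synthesized projection)."""
    return projection

def diff_naf_loop(projections, angles, n_iters=2):
    """Iterative pseudo-label scheme: each round densifies the angular
    sampling with refined synthetic projections, then refits."""
    for _ in range(n_iters):
        coeffs = train_naf(projections, angles)
        # Synthesize at midpoints between currently available view angles.
        a_sorted = np.sort(angles)
        mids = (a_sorted[:-1] + a_sorted[1:]) / 2.0
        synth = np.array([diffusion_refine(render_projection(coeffs, a))
                          for a in mids])
        # Refined projections join the training set as pseudo-labels.
        projections = np.concatenate([projections, synth])
        angles = np.concatenate([angles, mids])
    return train_naf(projections, angles)
```

The key design point mirrored here is that each iteration enlarges the effective training set, so later fits see a denser angular sampling than the ultra-sparse measured views alone.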