Iterative Diffusion-Refined Neural Attenuation Fields for Multi-Source Stationary CT Reconstruction: NAF Meets Diffusion Model

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the severe degradation in reconstruction quality, and the failure of conventional interpolation methods, in multi-source stationary CT under ultra-sparse-view (extremely low sampling rate) conditions, this paper proposes Diff-NAF: a framework that integrates Neural Attenuation Fields (NAFs) with a dual-branch conditional diffusion model to establish an iterative sparse-projection completion paradigm. Its core components are an Angle-Prior Guided Projection Synthesis strategy and a Diffusion-driven Reuse Projection Refinement Module, augmented by a pseudo-label-based iterative training scheme. Evaluated on multiple synthetic 3D volumes and real-world projection datasets, Diff-NAF consistently outperforms state-of-the-art methods under ultra-sparse sampling, with clear gains in structural fidelity and quantitative metrics (e.g., PSNR, SSIM). This work establishes a new paradigm for low-dose stationary CT imaging.

📝 Abstract
Multi-source stationary computed tomography (CT) has recently attracted attention for its ability to achieve rapid image reconstruction, making it suitable for time-sensitive clinical and industrial applications. However, practical systems are often constrained by ultra-sparse-view sampling, which significantly degrades reconstruction quality. Traditional methods struggle under ultra-sparse-view settings, where interpolation becomes inaccurate and the resulting reconstructions are unsatisfactory. To address this challenge, this study proposes Diffusion-Refined Neural Attenuation Fields (Diff-NAF), an iterative framework tailored for multi-source stationary CT under ultra-sparse-view conditions. Diff-NAF combines a Neural Attenuation Field representation with a dual-branch conditional diffusion model. The process begins by training an initial NAF using ultra-sparse-view projections. New projections are then generated through an Angle-Prior Guided Projection Synthesis strategy that exploits inter-view priors, and are subsequently refined by a Diffusion-driven Reuse Projection Refinement Module. The refined projections are incorporated as pseudo-labels into the training set for the next iteration. Through iterative refinement, Diff-NAF progressively enhances projection completeness and reconstruction fidelity under ultra-sparse-view conditions, ultimately yielding high-quality CT reconstructions. Experimental results on multiple simulated 3D CT volumes and real projection data demonstrate that Diff-NAF achieves the best performance under ultra-sparse-view conditions.
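The abstract describes a train-synthesize-refine-retrain loop. As a rough, hedged sketch of that control flow only: the real NAF, the angle-prior synthesis, and the diffusion refinement module are replaced here by toy numpy stand-ins (all function names and the smoothing kernel are illustrative assumptions, not the paper's implementation).

```python
import numpy as np

def train_naf(projections, angles):
    """Toy stand-in for fitting a Neural Attenuation Field to the
    current projection set: a per-detector-bin mean over views."""
    return projections.mean(axis=0)

def synthesize_projection(naf, angle):
    """Toy stand-in for Angle-Prior Guided Projection Synthesis:
    'render' the fitted model at a new, unseen angle."""
    return naf.copy()

def diffusion_refine(projection):
    """Toy stand-in for the Diffusion-driven Reuse Projection
    Refinement Module: light smoothing of the synthesized view."""
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(projection, kernel, mode="same")

def diff_naf(projections, angles, new_angles, n_iters=3):
    """Iteratively densify the projection set with refined
    pseudo-label projections, then refit the NAF each round."""
    projections, angles = list(projections), list(angles)
    for _ in range(n_iters):
        naf = train_naf(np.stack(projections), angles)
        for angle in new_angles:
            refined = diffusion_refine(synthesize_projection(naf, angle))
            projections.append(refined)  # pseudo-label for next iteration
            angles.append(angle)
        new_angles = []  # toy choice: densify once, then keep refitting
    return train_naf(np.stack(projections), angles)

# Example: 4 ultra-sparse views of a 16-bin detector, densified at 2 new angles
rng = np.random.default_rng(0)
views = rng.random((4, 16))
recon = diff_naf(views, [0, 45, 90, 135], new_angles=[22.5, 67.5])
print(recon.shape)  # (16,)
```

The point of the sketch is the feedback structure: refined synthetic projections re-enter the training set as pseudo-labels, so each NAF refit sees a progressively more complete projection set.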
Problem

Research questions and friction points this paper is trying to address.

Enhancing CT reconstruction quality under ultra-sparse-view sampling constraints
Addressing inaccurate interpolation in multi-source stationary CT systems
Improving projection completeness for time-sensitive clinical applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural Attenuation Fields representation for CT reconstruction
Dual-branch conditional diffusion model for refinement
Iterative projection synthesis and reuse enhancement
Jiancheng Fang
School of Information Engineering, Nanchang University, Nanchang 330031, China
Shaoyu Wang
School of Information Engineering, Nanchang University, Nanchang 330031, China
Junlin Wang
Duke University
Computer science, NLP
Weiwen Wu
Sun Yat-Sen University
Image reconstruction, deep learning, compressed sensing, diffusion model
Yikun Zhang
Laboratory of Image Science and Technology, School of Computer Science and Engineering, and the Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications, Ministry of Education, Southeast University, Nanjing 210096, China
Qiegen Liu
Nanchang University
Medical imaging, image processing