🤖 AI Summary
This work addresses the challenge of ultra-sparse-view cone-beam computed tomography (CBCT) reconstruction, where severe undersampling artifacts and inter-slice inconsistencies—stemming from extremely limited angular sampling—compromise diagnostic reliability. To overcome these limitations, the authors propose a neural prior–driven continuous 3D attenuation representation, integrated with a dual-path collaborative diffusion mechanism that operates jointly in the sinogram domain (Sino-RD) and the digital radiography domain (DR-RD). A dual-projection reconstruction fusion (DPRF) module adaptively merges the outputs of the two paths, jointly preserving angular continuity and inter-slice consistency. The authors claim this is the first approach to maintain both 3D structural integrity and view continuity under ultra-sparse sampling, effectively suppressing artifacts while recovering fine textural details. Quantitative and qualitative evaluations demonstrate superior reconstruction quality compared to current state-of-the-art methods.
📝 Abstract
The clinical application of cone-beam computed tomography (CBCT) is constrained by the inherent trade-off between radiation exposure and image quality. Ultra-sparse angular sampling, employed to reduce dose, introduces severe undersampling artifacts and inter-slice inconsistencies, compromising diagnostic reliability. Existing reconstruction methods often struggle to balance angular continuity with spatial detail fidelity. To address these challenges, we propose a Continuity-driven Synergistic Diffusion with Neural priors (CSDN) for ultra-sparse-view CBCT reconstruction. Neural priors are introduced as a structural foundation to encode a continuous three-dimensional attenuation representation, enabling the synthesis of physically consistent dense projections from ultra-sparse measurements. Building upon this neural-prior-based initialization, a synergistic diffusion strategy is developed, consisting of two collaborative refinement paths: a Sinogram Refinement Diffusion (Sino-RD) process that restores angular continuity and a Digital Radiography Refinement Diffusion (DR-RD) process that enforces inter-slice consistency from the projection-image perspective. The outputs of the two diffusion paths are adaptively fused by the Dual-Projection Reconstruction Fusion (DPRF) module to achieve coherent volumetric reconstruction. Extensive experiments demonstrate that the proposed CSDN effectively suppresses artifacts and recovers fine textures under ultra-sparse-view conditions, outperforming existing state-of-the-art techniques.
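To make the fusion step concrete, the sketch below illustrates one plausible form of the adaptive fusion the abstract attributes to the DPRF module: two refined projection stacks (one from the sinogram-domain path, one from the DR-domain path) are blended with a per-pixel gate. The function name, the sigmoid gating rule, and the array shapes are illustrative assumptions for exposition; the paper's actual DPRF design is not specified here.

```python
import numpy as np

def adaptive_fusion(sino_refined: np.ndarray,
                    dr_refined: np.ndarray,
                    gate_logits: np.ndarray) -> np.ndarray:
    """Blend two refined projection stacks with a per-pixel sigmoid gate.

    Hypothetical stand-in for a DPRF-style fusion: `gate_logits` would be
    predicted by a small network in practice; here it is just an input array.
    """
    alpha = 1.0 / (1.0 + np.exp(-gate_logits))  # per-pixel weight in (0, 1)
    return alpha * sino_refined + (1.0 - alpha) * dr_refined

# Toy usage: two 4x4 "projections" and a neutral gate (logits = 0 -> alpha = 0.5),
# so the fusion reduces to a plain average of the two paths.
sino_out = np.full((4, 4), 2.0)
dr_out = np.zeros((4, 4))
fused = adaptive_fusion(sino_out, dr_out, np.zeros((4, 4)))
```

With a learned gate, regions where one path is more reliable (e.g. strong angular structure vs. strong slice-to-slice structure) would receive a weight closer to 0 or 1 rather than the neutral 0.5 shown here.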