AI Summary
To address artifact propagation across multi-stage processing in synchrotron-based parallel-beam CT, this work proposes a stage-wise collaborative deep learning architecture. Customized U-Net variants are embedded at the projection, sinogram, and reconstruction stages, augmented by cross-stage residual bypass connections and a dual-path input mechanism integrating raw data and intermediate outputs, enabling localized, physics-informed, artifact-specific suppression. This is the first approach to jointly optimize artifact-type modeling fidelity and computational efficiency via physics-guided data augmentation and end-to-end joint fine-tuning. Evaluated on both simulated and real synchrotron CT data, the method achieves PSNR gains of 4.2–6.8 dB and SSIM improvements of 0.07–0.13 over baselines, significantly outperforming single-stage deep models and conventional denoising methods. Notably, it delivers superior suppression of ring, streak, and reconstruction artifacts.
Abstract
Computed Tomography (CT) using synchrotron radiation is a powerful technique that, compared to lab-based CT, boasts high spatial and temporal resolution while also providing access to a range of contrast-formation mechanisms. The acquired projection data are typically processed by a computational pipeline composed of multiple stages. Artifacts introduced during data acquisition can propagate through the pipeline and degrade the quality of the reconstructed images. Recently, deep learning has shown significant promise in enhancing the quality of scientific images, and this success has driven its increasing adoption in CT imaging. Various approaches have been proposed to incorporate deep learning into computational pipelines, but each falls short for synchrotron CT, either in properly addressing the specific artifacts or in computational efficiency. Recognizing these challenges, we introduce a novel method that incorporates separate deep learning models at each stage of the tomography pipeline (projection, sinogram, and reconstruction) to address specific artifacts locally in a data-driven way. Our approach includes bypass connections that feed both the outputs from previous stages and the raw data to subsequent stages, minimizing the risk of error propagation. Extensive evaluations on both simulated and real-world datasets show that our approach effectively reduces artifacts and outperforms comparison methods.
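The staged design with bypass connections can be sketched in a few lines. This is a minimal, hypothetical illustration of the wiring only: `stage_model` stands in for a trained stage-specific U-Net (here it simply averages its two input paths), and the function and variable names are assumptions, not the paper's actual implementation.

```python
import numpy as np

def stage_model(x, raw):
    """Hypothetical stand-in for a stage-specific U-Net variant.

    Receives the previous stage's output `x` and the raw data `raw`
    (the dual-path / bypass input). A real model would be a trained
    CNN; averaging the two paths here only illustrates the wiring.
    """
    return 0.5 * (x + raw)

def staged_pipeline(raw_projections):
    """Sketch of the three-stage pipeline with bypass connections.

    Each stage sees both the preceding stage's output and the raw
    data, so one stage's errors are never the sole input to the
    next, which is the point of the bypass mechanism.
    """
    proj = stage_model(raw_projections, raw_projections)  # projection stage
    sino = stage_model(proj, raw_projections)             # sinogram stage
    recon = stage_model(sino, raw_projections)            # reconstruction stage
    return recon

# Example: shapes are preserved end to end.
x = np.random.default_rng(0).random((4, 8))
out = staged_pipeline(x)
assert out.shape == x.shape
```

In practice the intermediate representations differ per stage (projections, sinograms, reconstructed slices), so the raw-data path would pass through the corresponding conventional transform before being concatenated; the sketch omits that for brevity.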