AI Summary
To address the poor generalization of low-dose CT (LDCT) reconstruction models to dose levels unseen during training, this paper proposes NEED, a noise-inspired dual-domain diffusion model. NEED is trained solely on normal-dose CT data and achieves cross-dose generalization via a cascaded architecture: shifted Poisson diffusion denoising in the projection domain, followed by doubly guided diffusion refinement in the image domain. Its key innovations are a shifted Poisson forward process that explicitly matches the statistics of pre-log LDCT projection noise, a time step matching strategy that extends the trained models to unseen dose levels at test time, and a dual guidance mechanism that jointly leverages the LDCT image and an initial reconstruction as priors. Extensive experiments on two public benchmarks demonstrate that NEED consistently outperforms state-of-the-art methods in quantitative metrics (e.g., PSNR, SSIM), visual fidelity, and downstream segmentation performance. The source code is publicly available.
Abstract
The generalization of deep learning-based low-dose computed tomography (CT) reconstruction models to doses unseen in the training data is important and remains challenging. Previous efforts rely heavily on paired data to improve generalization and robustness, either by collecting diverse CT data for re-training or a few test samples for fine-tuning. Recently, diffusion models have shown promising and generalizable performance in low-dose CT (LDCT) reconstruction; however, they may produce unrealistic structures because CT image noise deviates from a Gaussian distribution and because the guidance from noisy LDCT images provides imprecise prior information. In this paper, we propose a noise-inspired diffusion model for generalizable LDCT reconstruction, termed NEED, which tailors diffusion models to the noise characteristics of each domain. First, we propose a novel shifted Poisson diffusion model to denoise projection data, which aligns the diffusion process with the noise model of pre-log LDCT projections. Second, we devise a doubly guided diffusion model to refine reconstructed images, which leverages LDCT images and initial reconstructions to more accurately locate prior information and enhance reconstruction fidelity. By cascading these two diffusion models for dual-domain reconstruction, our NEED requires only normal-dose data for training and can be effectively extended to various unseen dose levels during testing via a time step matching strategy. Extensive qualitative, quantitative, and segmentation-based evaluations on two datasets demonstrate that our NEED consistently outperforms state-of-the-art methods in reconstruction and generalization performance. Source code is made available at https://github.com/qgao21/NEED.
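To make the projection-domain motivation concrete, the sketch below simulates the pre-log measurement noise that the shifted Poisson model is designed to approximate: photon counts drawn from a Poisson distribution with mean determined by the incident flux and the line integral, plus Gaussian electronic noise. This is a minimal illustration, not the paper's implementation; the flux `I0`, electronic-noise level `sigma_e`, and the toy sinogram are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ldct_projection(p_nd, I0=1e4, sigma_e=10.0):
    """Simulate a low-dose measurement from a normal-dose line integral p_nd.

    Pre-log counts follow the standard CT noise model that the shifted
    Poisson distribution approximates: Poisson photon statistics with mean
    I0 * exp(-p_nd), plus zero-mean Gaussian electronic noise. I0 (incident
    photon count, i.e. dose level) and sigma_e are illustrative values.
    """
    mean_counts = I0 * np.exp(-p_nd)
    counts = rng.poisson(mean_counts).astype(float)      # quantum noise
    counts += rng.normal(0.0, sigma_e, size=counts.shape)  # electronic noise
    counts = np.clip(counts, 1.0, None)  # guard against log of non-positives
    return -np.log(counts / I0)          # post-log low-dose sinogram

# Toy normal-dose sinogram of line integrals.
p_nd = rng.uniform(0.0, 3.0, size=(64, 64))
p_ld = simulate_ldct_projection(p_nd)
print(float(np.abs(p_ld - p_nd).mean()))
```

Because the Poisson mean scales with `I0`, lowering `I0` (i.e. the dose) increases the relative noise in the post-log sinogram, which is why a Gaussian forward process mismatches this domain and a noise-matched diffusion process is used instead.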