🤖 AI Summary
To address the lack of a unified model for denoising PET images across diverse count levels, this paper proposes a dual-prompt-guided, noise-conditioned diffusion denoising framework. Methodologically, the authors design an explicit count-level prompt, which incorporates count priors, and an implicit universal denoising prompt, which encodes intrinsic denoising knowledge, and introduce a learnable prompt fusion module and cross-layer prompt-feature interaction modules that dynamically modulate the U-Net backbone during denoising. The key contribution is a decoupled prompt architecture that separates count-specific characteristics from denoising-invariant knowledge, allowing a single model with personalized prompts to cover the full spectrum of count levels. Evaluated on 1,940 low-count tau-PET scans, the method achieves average improvements of 2.1 dB in PSNR and 0.032 in SSIM over state-of-the-art count-conditioned models, demonstrating superior generalizability and quantitative performance.
📝 Abstract
Positron emission tomography (PET) volumes to be denoised inherently exhibit diverse count levels, which makes it challenging for a unified model to handle the varied cases. In this work, we draw on recently flourishing prompt learning to achieve generalizable PET denoising across different count levels. Specifically, we propose dual prompts to guide PET denoising in a divide-and-conquer manner: an explicit count-level prompt that provides case-specific prior information, and an implicit general denoising prompt that encodes the essential PET denoising knowledge. A novel prompt fusion module is then developed to unify the heterogeneous prompts, followed by a prompt-feature interaction module that injects the fused prompt into the features. The prompts dynamically guide the noise-conditioned denoising process. We can therefore efficiently train a unified denoising model for various count levels and deploy it to different cases with personalized prompts. We evaluated our method on 1,940 low-count 3D PET volumes, generated with uniformly randomly selected 13-22% fractions of events from 97 $^{18}$F-MK6240 tau-PET studies. The results show that our dual prompting, when informed of the count level, substantially improves performance and outperforms the count-conditional model.
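The abstract's pipeline (explicit count-level prompt, implicit general prompt, prompt fusion, then prompt-feature interaction) can be sketched in PyTorch. This is a minimal illustrative guess at the architecture, not the authors' implementation: all module names, dimensions, and the FiLM-style scale-and-shift injection are assumptions.

```python
import torch
import torch.nn as nn


class DualPromptFusion(nn.Module):
    """Fuses an explicit count-level prompt with an implicit general
    denoising prompt (a hypothetical reading of the paper's
    'prompt fusion module')."""

    def __init__(self, dim=64):
        super().__init__()
        # Implicit prompt: learned parameters shared across all count levels.
        self.general_prompt = nn.Parameter(torch.randn(dim))
        # Explicit prompt: embed a scalar event fraction (e.g. 0.13-0.22).
        self.count_embed = nn.Sequential(
            nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim)
        )
        # Learnable fusion of the two heterogeneous prompts.
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, count_fraction):
        # count_fraction: (B, 1) tensor of event fractions per case.
        explicit = self.count_embed(count_fraction)
        implicit = self.general_prompt.expand_as(explicit)
        return self.fuse(torch.cat([explicit, implicit], dim=-1))


class PromptFeatureInteraction(nn.Module):
    """Injects the fused prompt into a 3D U-Net feature map via
    channel-wise scale-and-shift modulation (one plausible form of the
    'prompt-feature interaction module')."""

    def __init__(self, dim=64, channels=32):
        super().__init__()
        self.to_scale_shift = nn.Linear(dim, 2 * channels)

    def forward(self, features, prompt):
        # features: (B, C, D, H, W) feature map from the denoising backbone.
        scale, shift = self.to_scale_shift(prompt).chunk(2, dim=-1)
        scale = scale[:, :, None, None, None]
        shift = shift[:, :, None, None, None]
        return features * (1 + scale) + shift
```

In this sketch the fused prompt would be computed once per case from its count fraction and then injected at multiple U-Net layers (each with its own `PromptFeatureInteraction`), so a single trained model can be steered at deployment simply by supplying the case's count level.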