📝 Abstract
Recent work has shown improved lesion detectability and flexibility to reconstruction hyperparameters (e.g., scanner geometry or dose level) when PET images are reconstructed by leveraging pre-trained diffusion models. Such methods train a diffusion model (without sinogram data) on high-quality, but still noisy, PET images. In this work, we propose a simple method for generating subject-specific PET images from a dataset of multi-subject PET-MR scans, synthesizing "pseudo-PET" images by transforming between different patients' anatomy using image registration. The images we synthesize retain information from the subject's MR scan, leading to higher resolution and the retention of anatomical features compared to the original set of PET images. With simulated and real [$^{18}$F]FDG datasets, we show that pre-training a personalized diffusion model with subject-specific "pseudo-PET" images improves reconstruction accuracy with low-count data. In particular, the method shows promise in combining information from a guidance MR scan without overly imposing anatomical features, demonstrating an improved trade-off between reconstructing PET-unique image features versus features present in both PET and MR. We believe this approach for generating and utilizing synthetic data has further applications to medical imaging tasks, particularly because patient-specific PET images can be generated without resorting to generative deep learning or large training datasets.
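The core pseudo-PET idea can be caricatured with a toy 1-D sketch (numpy only): a transform is estimated between two subjects' MR images and then reused to warp the source subject's PET into the target subject's anatomy. Here a brute-force translation search stands in for the real deformable registration used in practice; the function names (`register_translation`, `pseudo_pet`) are illustrative, not from the paper.

```python
import numpy as np

def register_translation(fixed, moving, max_shift=10):
    """Toy registration: exhaustive search for the circular shift that
    minimises the sum-of-squared-differences between two MR profiles."""
    best_s, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.sum((fixed - np.roll(moving, s)) ** 2)
        if err < best_err:
            best_s, best_err = s, err
    return best_s

def pseudo_pet(mr_target, mr_source, pet_source, max_shift=10):
    """Estimate the MR-to-MR transform, then apply it to the source PET,
    producing a "pseudo-PET" in the target subject's anatomy."""
    s = register_translation(mr_target, mr_source, max_shift)
    return np.roll(pet_source, s)
```

In the actual method the transform would be a full 3-D (deformable) registration between MR volumes, and the resulting pseudo-PET images are then used to personalize a pre-trained diffusion prior; this sketch only illustrates the "register on MR, warp the PET" step.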