🤖 AI Summary
This work addresses the limitations of multi-tracer PET imaging—namely high cost, radiation exposure, and tracer scarcity—as well as the shortcomings of existing MRI-to-PET synthesis methods, which often yield images with artifacts and insufficient pathological detail. To overcome these challenges, the authors propose a relativistic adversarial diffusion framework that leverages multi-sequence MRI inputs, specifically T1-weighted and T2-FLAIR, to synthesize high-fidelity PET images. At each diffusion time step, a relativistic adversarial loss with gradient penalty is introduced to enhance local structural realism through relative discrimination and stabilize training dynamics. Evaluated on two datasets, the method significantly outperforms current state-of-the-art approaches, achieving superior visual fidelity and quantitative performance in synthesizing multi-tracer PET images.
📝 Abstract
Multi-tracer positron emission tomography (PET) provides critical insights into diverse neuropathological processes such as tau accumulation, neuroinflammation, and $\beta$-amyloid deposition in the brain, making it indispensable for comprehensive neurological assessment. However, routine acquisition of multi-tracer PET is limited by high costs, radiation exposure, and restricted tracer availability. Recent efforts have explored deep learning approaches for synthesizing PET images from structural MRI. While some methods rely solely on T1-weighted MRI, others incorporate additional sequences such as T2-FLAIR to improve pathological sensitivity. Yet existing methods often struggle to capture fine-grained anatomical and pathological details, resulting in artifacts and unrealistic outputs. To address these limitations, we propose RelA-Diffusion, a Relativistic Adversarial Diffusion framework for multi-tracer PET synthesis from multi-sequence MRI. By leveraging both T1-weighted and T2-FLAIR scans as complementary inputs, RelA-Diffusion captures richer structural information to guide image generation. To improve synthesis fidelity, we apply a gradient-penalized relativistic adversarial loss to the intermediate clean predictions of the diffusion model. This loss compares real and generated images in a relative manner, encouraging the synthesis of more realistic local structures. Both the relativistic formulation and the gradient penalty help stabilize training, while adversarial feedback at each diffusion timestep enables consistent refinement throughout the generation process. Extensive experiments on two datasets demonstrate that RelA-Diffusion outperforms existing methods in both visual fidelity and quantitative metrics, highlighting its potential for accurate multi-tracer PET synthesis.
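The abstract does not give the exact formulation of the loss, but a relativistic average adversarial loss with a gradient penalty typically looks like the minimal PyTorch sketch below. The discriminator `disc`, the helper names, and the BCE-based relativistic-average (RaGAN-style) formulation with a WGAN-GP-style penalty are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a gradient-penalized relativistic
# adversarial loss, as could be applied to a diffusion model's intermediate
# clean prediction. Assumes a discriminator `disc` mapping images to scalar logits.
import torch
import torch.nn.functional as F


def relativistic_d_loss(disc, real, fake):
    """Relativistic average discriminator loss (RaGAN-style, BCE formulation)."""
    d_real = disc(real)
    d_fake = disc(fake.detach())
    # Real images should score higher than the average fake score, and vice versa.
    loss_real = F.binary_cross_entropy_with_logits(
        d_real - d_fake.mean(), torch.ones_like(d_real))
    loss_fake = F.binary_cross_entropy_with_logits(
        d_fake - d_real.mean(), torch.zeros_like(d_fake))
    return 0.5 * (loss_real + loss_fake)


def relativistic_g_loss(disc, real, fake):
    """Relativistic average generator loss: fakes should outscore the average real."""
    d_real = disc(real)
    d_fake = disc(fake)
    loss_real = F.binary_cross_entropy_with_logits(
        d_real - d_fake.mean(), torch.zeros_like(d_real))
    loss_fake = F.binary_cross_entropy_with_logits(
        d_fake - d_real.mean(), torch.ones_like(d_fake))
    return 0.5 * (loss_real + loss_fake)


def gradient_penalty(disc, real, fake, gp_weight=10.0):
    """WGAN-GP-style penalty on the discriminator gradient at interpolated points."""
    # Broadcast the mixing coefficient over all non-batch dimensions.
    alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interp = (alpha * real + (1 - alpha) * fake.detach()).requires_grad_(True)
    d_interp = disc(interp)
    grads = torch.autograd.grad(
        outputs=d_interp, inputs=interp,
        grad_outputs=torch.ones_like(d_interp),
        create_graph=True, retain_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return gp_weight * ((grad_norm - 1.0) ** 2).mean()
```

In such a setup, the denoiser's clean prediction at each timestep (e.g. x0_hat recovered from x_t and the predicted noise) would play the role of `fake` against the real PET target, with the generator term added to the standard diffusion noise-prediction objective; the exact weighting and discriminator architecture are not specified in the abstract.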