Whole-Body Image-to-Image Translation for a Virtual Scanner in a Healthcare Digital Twin

📅 2025-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the degradation in synthesis accuracy for whole-body CT-to-PET translation caused by anatomical heterogeneity. We propose a region-specific generative adversarial network (GAN) framework that partitions the human body into four anatomically coherent districts (head, trunk, arms, and legs), training a dedicated generator for each and stitching the synthetic outputs back into a whole-body PET volume. Both paired (Pix2Pix) and unpaired (CycleGAN) training paradigms are evaluated for each district. The resulting "virtual PET scanner" can substantially reduce patient radiation exposure and clinical scanning costs, enabling functional imaging simulation in medical Digital Twin applications. Quantitative and qualitative evaluations demonstrate superior performance over a non-segmented global GAN baseline at the district, whole-body, and lesion levels, establishing state-of-the-art results for whole-body CT-to-PET translation.

📝 Abstract
Generating positron emission tomography (PET) images from computed tomography (CT) scans via deep learning offers a promising pathway to reduce radiation exposure and costs associated with PET imaging, improving patient care and accessibility to functional imaging. Whole-body image translation presents challenges due to anatomical heterogeneity, often limiting generalized models. We propose a framework that segments whole-body CT images into four regions (head, trunk, arms, and legs) and uses district-specific Generative Adversarial Networks (GANs) for tailored CT-to-PET translation. Synthetic PET images from each region are stitched together to reconstruct the whole-body scan. Comparisons with a baseline non-segmented GAN and experiments with Pix2Pix and CycleGAN architectures tested paired and unpaired scenarios. Quantitative evaluations at district, whole-body, and lesion levels demonstrated significant improvements with our district-specific GANs. Pix2Pix yielded superior metrics, ensuring precise, high-quality image synthesis. By addressing anatomical heterogeneity, this approach achieves state-of-the-art results in whole-body CT-to-PET translation. This methodology supports healthcare Digital Twins by enabling accurate virtual PET scans from CT data, creating virtual imaging representations to monitor, predict, and optimize health outcomes.
Problem

Research questions and friction points this paper is trying to address.

High radiation exposure and costs of PET imaging limit patient access to functional imaging.
Anatomical heterogeneity degrades generalized models for whole-body image translation.
Healthcare Digital Twins need accurate virtual PET scans derived from CT data.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Segments CT images into four anatomical regions.
Uses district-specific GANs for CT-to-PET translation.
Stitches synthetic PET images for whole-body reconstruction.
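The pipeline above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the region masks are assumed to come from an upstream segmentation step, and `translate_region` is a placeholder for the district-specific Pix2Pix or CycleGAN generators described in the abstract (the per-region scale factors are arbitrary stand-ins so the sketch runs end to end).

```python
import numpy as np

# The paper partitions whole-body CT into four anatomical districts
# before translation; each district has its own trained generator.
REGIONS = ("head", "trunk", "arms", "legs")


def translate_region(ct_region, region_name):
    """Placeholder for a district-specific CT-to-PET GAN generator.

    A trained Pix2Pix (paired) or CycleGAN (unpaired) model would go
    here; an arbitrary per-region scaling stands in for it.
    """
    scale = {"head": 1.2, "trunk": 1.0, "arms": 0.8, "legs": 0.9}
    return ct_region * scale[region_name]


def virtual_pet(ct_volume, region_masks):
    """Translate each district separately, then stitch the synthetic
    PET regions back into one whole-body volume."""
    pet = np.zeros_like(ct_volume, dtype=float)
    for name in REGIONS:
        mask = region_masks[name]  # boolean mask for this district
        pet[mask] = translate_region(ct_volume[mask], name)
    return pet


# Toy example: a 1D "body axis" standing in for a 3D CT volume,
# with two voxels per district.
ct = np.ones(8)
masks = {
    "head":  np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=bool),
    "trunk": np.array([0, 0, 1, 1, 0, 0, 0, 0], dtype=bool),
    "arms":  np.array([0, 0, 0, 0, 1, 1, 0, 0], dtype=bool),
    "legs":  np.array([0, 0, 0, 0, 0, 0, 1, 1], dtype=bool),
}
print(virtual_pet(ct, masks))  # each district translated by its own model
```

Translating each district independently lets every generator specialize on one anatomy's CT-to-PET mapping, which is the paper's answer to the heterogeneity problem; the stitching step then reassembles a whole-body scan from the per-district outputs.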