GR-Diffusion: 3D Gaussian Representation Meets Diffusion in Whole-Body PET Reconstruction

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses noise amplification and detail loss in low-dose whole-body PET reconstruction caused by sparse sampling and the ill-posed nature of the inverse problem. It proposes the first framework that synergistically integrates 3D Gaussian representation (GR) with a diffusion model for PET image reconstruction. The method leverages GR to provide geometric priors, generating structurally coherent reference images, and introduces a dual-granularity, multi-scale hierarchical guidance mechanism to jointly enforce global consistency and recover fine local details during the diffusion process. By transcending the low-pass limitations inherent in conventional point-based or voxel-based approaches, the framework effectively reconstructs sub-voxel information. Experiments demonstrate that the model significantly outperforms state-of-the-art methods on both the UDPET and clinical datasets, consistently enhancing image quality across varying dose levels while preserving critical anatomical structures.

📝 Abstract
Positron emission tomography (PET) reconstruction is a critical challenge in molecular imaging, often hampered by noise amplification, structural blurring, and detail loss due to sparse sampling and the ill-posed nature of inverse problems. The three-dimensional discrete Gaussian representation (GR), which efficiently encodes 3D scenes using parameterized discrete Gaussian distributions, has shown promise in computer vision. In this work, we propose a novel GR-Diffusion framework that synergistically integrates the geometric priors of GR with the generative power of diffusion models for 3D low-dose whole-body PET reconstruction. GR-Diffusion employs GR to generate a reference 3D PET image from projection data, establishing a physically grounded and structurally explicit benchmark that overcomes the low-pass limitations of conventional point-based or voxel-based methods. This reference image serves as a dual guide during the diffusion process, ensuring both global consistency and local accuracy. Specifically, we employ a hierarchical guidance mechanism based on the GR reference. Fine-grained guidance leverages differences to refine local details, while coarse-grained guidance uses multi-scale difference maps to correct deviations. This strategy allows the diffusion model to sequentially integrate the strong geometric prior from GR and recover sub-voxel information. Experimental results on the UDPET and clinical datasets with varying dose levels show that GR-Diffusion outperforms state-of-the-art methods in enhancing 3D whole-body PET image quality and preserving physiological details.
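The dual-granularity guidance described above can be sketched in a few lines: a fine-grained term pulls the diffusion intermediate toward the GR reference voxel-wise, while coarse-grained terms compare the two volumes at reduced resolutions via multi-scale difference maps. The function names, weights, and scale factors below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def downsample(x, factor):
    """Average-pool a 3D volume by an integer factor (simple multi-scale proxy)."""
    d, h, w = (s // factor for s in x.shape)
    x = x[:d * factor, :h * factor, :w * factor]
    return x.reshape(d, factor, h, factor, w, factor).mean(axis=(1, 3, 5))

def upsample(x, factor, shape):
    """Nearest-neighbor upsample back to `shape` (illustrative only)."""
    x = np.repeat(np.repeat(np.repeat(x, factor, 0), factor, 1), factor, 2)
    return x[:shape[0], :shape[1], :shape[2]]

def hierarchical_guidance(x_t, gr_ref, w_fine=0.5, w_coarse=0.25, scales=(2, 4)):
    """Correct a diffusion intermediate x_t toward the GR reference image.

    Fine-grained term: voxel-wise difference to the reference (local detail).
    Coarse-grained term: multi-scale difference maps, computed by downsampling
    both volumes, differencing, and upsampling back (global consistency).
    Weights and scales here are hypothetical placeholders.
    """
    correction = w_fine * (gr_ref - x_t)          # fine-grained guidance
    for s in scales:                              # coarse-grained guidance
        diff = downsample(gr_ref, s) - downsample(x_t, s)
        correction += w_coarse * upsample(diff, s, x_t.shape)
    return x_t + correction
```

In a full sampler this correction would be applied at each reverse-diffusion step, letting coarse terms dominate early (global structure) and the fine term refine details later.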
Problem

Research questions and friction points this paper is trying to address.

PET reconstruction
noise amplification
structural blurring
detail loss
low-dose imaging
Innovation

Methods, ideas, or system contributions that make the work stand out.

GR-Diffusion
3D Gaussian Representation
Diffusion Model
PET Reconstruction
Hierarchical Guidance
Mengxiao Geng
School of Information Engineering, Nanchang University, Nanchang 330031, China
Zijie Chen
Westlake University
deep learning
Ran Hong
School of Information Engineering, Nanchang University, Nanchang 330031, China
Bingxuan Li
UIUC
Qiegen Liu
Nanchang University
medical imaging, image processing