DX2CT: Diffusion Model for 3D CT Reconstruction from Bi or Mono-planar 2D X-ray(s)

๐Ÿ“… 2024-09-13
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿค– AI Summary
To reduce the high radiation exposure of X-ray computed tomography (CT), this paper proposes a method for reconstructing high-fidelity 3D CT volumes from single- or dual-view 2D X-ray projections, which require far less radiation. The method introduces three key components: (1) a position-aware 3D conditional diffusion model that explicitly encodes voxel-wise spatial coordinates; (2) a spatially modulated Transformer that fuses 2D X-ray features with 3D positional priors to condition slice-level generation; and (3) multi-view geometric constraints coupled with a 2D-to-3D feature mapping mechanism to enforce anatomical consistency. On multiple benchmark datasets, the approach outperforms state-of-the-art methods in structural fidelity, fine-detail preservation, and quantitative metrics including PSNR and SSIM, supporting low-dose 3D diagnostic imaging in clinical settings.

๐Ÿ“ Abstract
Computed tomography (CT) provides high-resolution medical imaging, but it can expose patients to high radiation. X-ray scanners have low radiation exposure, but their resolution is low. This paper proposes a new conditional diffusion model, DX2CT, that reconstructs three-dimensional (3D) CT volumes from bi- or mono-planar X-ray image(s). The proposed DX2CT consists of two key components: 1) modulating feature maps extracted from two-dimensional (2D) X-ray(s) with 3D positions of the CT volume using a new transformer, and 2) effectively using the modulated 3D position-aware feature maps as conditions of DX2CT. In particular, the proposed transformer can provide conditions rich in information about a target CT slice to the conditional diffusion model, enabling high-quality CT reconstruction. Our experiments on bi- and mono-planar X-ray benchmark datasets show that the proposed DX2CT outperforms several state-of-the-art methods. Our code and model will be available at: https://www.github.com/intyeger/DX2CT.
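The abstract's first component — modulating 2D X-ray feature maps with the 3D position of the target CT slice before using them as diffusion conditions — can be illustrated with a minimal numpy sketch. This is NOT the paper's implementation: the FiLM-style scale-and-shift modulation, the sinusoidal position embedding, and all names and shapes below are assumptions chosen only to show why a shared 2D feature map can yield a different condition for each slice.

```python
import numpy as np

rng = np.random.default_rng(0)

def sinusoidal_pos_embed(z, dim=16):
    """Encode a normalized slice position z in [0, 1] with sin/cos pairs.
    (Illustrative choice; the paper's transformer-based modulation is richer.)"""
    freqs = 2.0 ** np.arange(dim // 2)
    angles = np.pi * z * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])  # shape (dim,)

def modulate_features(feat, pos_emb, W_gamma, W_beta):
    """FiLM-style modulation: scale and shift each channel of the 2D
    X-ray features by projections of the 3D position embedding."""
    gamma = W_gamma @ pos_emb  # per-channel scale, shape (C,)
    beta = W_beta @ pos_emb    # per-channel shift, shape (C,)
    return feat * (1.0 + gamma[:, None, None]) + beta[:, None, None]

C, H, W = 8, 32, 32
xray_feat = rng.standard_normal((C, H, W))   # stand-in for encoded 2D X-ray features
W_gamma = 0.1 * rng.standard_normal((C, 16)) # hypothetical learned projections
W_beta = 0.1 * rng.standard_normal((C, 16))

# The same 2D features produce distinct conditions for different target slices,
# which is what lets a slice-wise diffusion model stay position-aware.
cond_top = modulate_features(xray_feat, sinusoidal_pos_embed(0.1), W_gamma, W_beta)
cond_mid = modulate_features(xray_feat, sinusoidal_pos_embed(0.5), W_gamma, W_beta)

print(cond_top.shape)                    # (8, 32, 32)
print(np.allclose(cond_top, cond_mid))   # False: conditions differ per slice
```

In the actual model these modulated maps would condition a denoising network at every diffusion step; here the point is only the mechanism by which 3D position information enters otherwise position-agnostic 2D features.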
Problem

Research questions and friction points this paper is trying to address.

Radiology
X-ray Imaging
3D Reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

DX2CT
2D to 3D Reconstruction
Performance Superiority
Yun Su Jeong
Department of Electrical and Computer Engineering, Sungkyunkwan University, Republic of Korea
Hye Bin Yoo
Department of Electrical and Computer Engineering, Sungkyunkwan University, Republic of Korea
Il Yong Chun
Associate Professor of EEE, AI, ECE, ADE, SCE, DCE, & CNIR, Sungkyunkwan University
Artificial intelligence · Computer vision · Computational imaging