EqDiff-CT: Equivariant Conditional Diffusion model for CT Image Synthesis from CBCT

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address Hounsfield unit (HU) inaccuracies and artifacts in cone-beam CT (CBCT) images caused by photon scatter and beam hardening, this paper proposes EqDiff-CT, a conditional diffusion model with C₄ rotational equivariance. Leveraging a group-equivariant U-Net architecture and steerable convolutions from e2cnn, the model achieves robust anatomical modeling, particularly of fine bony structures. Evaluated on the SynthRAD2025 dataset, EqDiff-CT achieves high-fidelity CBCT-to-CT synthesis, significantly improving HU accuracy, structural fidelity, soft-tissue boundary delineation, and realism of bone reconstruction. Compared with CycleGAN and standard denoising diffusion probabilistic models (DDPMs), EqDiff-CT demonstrates substantial gains in dose-calculation accuracy and anatomical consistency. These advances provide high-quality synthetic CTs essential for adaptive radiotherapy planning and execution.

📝 Abstract
Cone-beam computed tomography (CBCT) is widely used for image-guided radiotherapy (IGRT). It provides real-time visualization at low cost and dose. However, photon scattering and beam hardening cause artifacts in CBCT, including inaccurate Hounsfield units (HU), which reduce its reliability for dose calculation and adaptive planning. By contrast, computed tomography (CT) offers better image quality and accurate HU calibration but is usually acquired offline and fails to capture intra-treatment anatomical changes. Accurate CBCT-to-CT synthesis is therefore needed to close the imaging-quality gap in adaptive radiotherapy workflows. To this end, we propose a novel diffusion-based conditional generative model, termed EqDiff-CT, to synthesize high-quality CT images from CBCT. EqDiff-CT employs a denoising diffusion probabilistic model (DDPM) to iteratively inject noise and learn latent representations that enable reconstruction of anatomically consistent CT images. A group-equivariant conditional U-Net backbone, implemented with e2cnn steerable layers, enforces rotational equivariance (cyclic C4 symmetry), helping preserve fine structural details while minimizing noise and artifacts. The system was trained and validated on the SynthRAD2025 dataset, comprising CBCT-CT scans across multiple head-and-neck anatomical sites, and compared with advanced methods such as CycleGAN and standard DDPMs. EqDiff-CT provided substantial gains in structural fidelity, HU accuracy, and quantitative metrics. Visual results further confirm improved detail recovery, sharper soft-tissue boundaries, and more realistic bone reconstruction. The findings suggest that the diffusion model offers a robust and generalizable framework for CBCT enhancement. The proposed solution improves both image quality and clinical confidence in CBCT-guided treatment planning and dose calculation.
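The DDPM forward process the abstract refers to can be sketched as follows. This is a minimal NumPy illustration of the standard closed-form noising step, not the authors' implementation; the linear schedule parameters are the common defaults from the DDPM literature and are assumptions here.

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule; returns per-step betas and cumulative alpha-bars."""
    betas = np.linspace(beta_start, beta_end, T)
    alpha_bars = np.cumprod(1.0 - betas)  # signal retention after t steps
    return betas, alpha_bars

def q_sample(x0, t, alpha_bars, eps):
    """Closed-form forward noising: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps."""
    ab = alpha_bars[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps
```

During training, a conditional denoiser (in this paper, the group-equivariant U-Net conditioned on the CBCT image) is trained to predict `eps` from `x_t`; sampling then reverses the corruption step by step.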
Problem

Research questions and friction points this paper is trying to address.

Synthesizing high-quality CT images from artifact-prone CBCT scans
Improving Hounsfield Unit accuracy for radiotherapy dose calculations
Addressing anatomical consistency in adaptive radiotherapy workflows
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion model synthesizes CT from CBCT images
Group-equivariant U-Net enforces rotational symmetry
Steerable CNN layers preserve structural details
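The rotational-equivariance idea behind the steerable layers can be illustrated with a minimal NumPy sketch of a C4 lifting convolution (one output channel per 90° rotation of a shared filter). This is a conceptual stand-in, not the paper's e2cnn-based implementation: rotating the input by 90° rotates each output map and cyclically permutes the rotation channels, rather than producing an unrelated response.

```python
import numpy as np

def correlate2d(x, w):
    """Valid-mode 2D cross-correlation via sliding windows."""
    k = w.shape[0]
    windows = np.lib.stride_tricks.sliding_window_view(x, (k, k))
    return np.einsum('ijkl,kl->ij', windows, w)

def c4_lifting_conv(x, w):
    """Correlate x with all four 90-degree rotations of the base filter w.

    Returns shape (4, H-k+1, W-k+1): one feature map per group element,
    so the C4 action on the input becomes a rotation + channel shift on
    the output (the equivariance property enforced by steerable layers)."""
    return np.stack([correlate2d(x, np.rot90(w, r)) for r in range(4)])
```

In e2cnn this weight sharing is handled by steerable basis filters rather than explicit filter rotation, which generalizes beyond exact 90° grids, but the guaranteed input/output symmetry is the same.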
Alzahra Altalib
Department of Applied Medical Sciences, Jordan University of Science and Technology, Irbid, 21410, Jordan, and the School of Science and Engineering, University of Dundee, DD1 4HN, UK
Chunhui Li
School of Science and Engineering, University of Dundee, DD1 4HN, UK
Alessandro Perelli
Lecturer in Biomedical Engineering, University of Dundee (UK)
Machine Learning · Optimization · Image/Signal Processing · Computed Tomography · Compressive Sensing