GLFC: Unified Global-Local Feature and Contrast Learning with Mamba-Enhanced UNet for Synthetic CT Generation from CBCT

📅 2025-01-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address global structural distortion, local detail blurring, and insufficient multi-tissue contrast modeling in CBCT-to-synthetic CT (sCT) translation, this work proposes a Mamba-enhanced U-Net architecture: Mamba modules are embedded within skip connections to jointly capture long-range dependencies and local textures; additionally, a multi-window contrastive loss (MCL) is introduced to hierarchically enforce intensity consistency across soft-tissue and bone regions. The method synergistically integrates sequence modeling capabilities with encoder-decoder structural advantages for end-to-end supervised learning. Evaluated on the SynthRAD2023 dataset, the generated sCT achieves an SSIM of 91.50%, representing a 13.59-percentage-point improvement over raw CBCT and outperforming state-of-the-art CNN- and Transformer-based baselines. This study constitutes the first application of Mamba within skip connections for medical image synthesis and introduces a clinically motivated, window-specific contrastive loss—establishing a novel paradigm for low-dose CBCT–based precision radiotherapy.
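The core architectural idea above — placing a sequence model inside the UNet's skip connections — can be sketched as a data-flow toy. Here the Mamba selective-scan block is replaced by a simple causal cumulative-mean mixer (a stand-in, not the paper's actual block), just to show how an H×W×C encoder feature map is flattened to a sequence, mixed globally, reshaped, and added back residually so local texture survives alongside long-range context. All function names are illustrative, not from the paper's code.

```python
import numpy as np

def sequence_mixer(seq):
    """Stand-in for a Mamba selective-scan block: a causal cumulative
    mean along the sequence axis. Only the data flow matches the paper;
    the real block uses learned state-space parameters."""
    csum = np.cumsum(seq, axis=0)
    counts = np.arange(1, seq.shape[0] + 1)[:, None]
    return csum / counts

def skip_connection_with_sequence_model(encoder_feat):
    """Apply a sequence model inside a UNet skip connection:
    flatten (H, W, C) features to an (H*W, C) sequence, mix globally,
    reshape back, and add residually to preserve local detail."""
    h, w, c = encoder_feat.shape
    seq = encoder_feat.reshape(h * w, c)
    mixed = sequence_mixer(seq).reshape(h, w, c)
    return encoder_feat + mixed  # residual: local texture + global context

# Toy skip-connection feature map
feat = np.random.rand(4, 4, 8)
out = skip_connection_with_sequence_model(feat)
```

The output keeps the encoder feature shape, so it can be concatenated with the decoder path exactly as a plain skip connection would be.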

📝 Abstract
Generating synthetic Computed Tomography (CT) images from Cone Beam Computed Tomography (CBCT) is desirable for improving the image quality of CBCT. Existing synthetic CT (sCT) generation methods using Convolutional Neural Networks (CNN) and Transformers often face difficulties in effectively capturing both global and local features and contrasts for high-quality sCT generation. In this work, we propose a Global-Local Feature and Contrast learning (GLFC) framework for sCT generation. First, a Mamba-Enhanced UNet (MEUNet) is introduced by integrating Mamba blocks into the skip connections of a high-resolution UNet for effective global and local feature learning. Second, we propose a Multiple Contrast Loss (MCL) that calculates synthetic loss at different intensity windows to improve quality for both soft tissues and bone regions. Experiments on the SynthRAD2023 dataset demonstrate that GLFC improved the SSIM of sCT from 77.91% to 91.50% compared with the original CBCT, and significantly outperformed several existing methods for sCT generation. The code is available at https://github.com/intelland/GLFC.
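The Multiple Contrast Loss described above can be sketched as an L1 loss evaluated inside several clinical intensity windows, so soft tissue and bone each contribute at their own contrast scale. This is a minimal NumPy illustration of the idea; the window settings (a typical soft-tissue window and a bone window, in Hounsfield units) and the equal weighting are assumptions for the example, not the paper's exact configuration.

```python
import numpy as np

def window_image(img, center, width):
    """Clip intensities to the window [center - width/2, center + width/2]
    and rescale to [0, 1], mimicking clinical CT display windowing."""
    low, high = center - width / 2.0, center + width / 2.0
    return (np.clip(img, low, high) - low) / (high - low)

def multi_window_l1(pred, target, windows, weights=None):
    """Sum of L1 losses computed inside each intensity window.
    `windows` is a list of (center, width) pairs in HU; the values used
    below are illustrative defaults, not the paper's settings."""
    if weights is None:
        weights = [1.0] * len(windows)
    total = 0.0
    for (center, width), wt in zip(windows, weights):
        p = window_image(pred, center, width)
        t = window_image(target, center, width)
        total += wt * np.mean(np.abs(p - t))
    return total

# Example: soft-tissue window (40/400 HU) and bone window (300/1500 HU).
windows = [(40, 400), (300, 1500)]
sct = np.array([[0.0, 100.0], [500.0, 1200.0]])  # toy synthetic CT
ct  = np.array([[0.0, 120.0], [450.0, 1200.0]])  # toy reference CT
loss = multi_window_l1(sct, ct, windows)
```

Because each window renormalizes its intensity range before the L1 comparison, small soft-tissue errors are not drowned out by the much larger dynamic range of bone — the motivation the abstract gives for MCL.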
Problem

Research questions and friction points this paper is trying to address.

CBCT to CT-like image generation
feature preservation
contrast enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

MEUNet
MCL loss function
CBCT to CT image enhancement