🤖 AI Summary
This work proposes a latent-space Brownian bridge matching framework that efficiently synthesizes contrast-enhanced MRI with realistic tumor contrast characteristics from non-contrast MRI scans, removing the need for gadolinium-based contrast agents. By bringing bridge processes into a learned latent space for medical image generation, the method integrates a tumor-biased attention mechanism (TuBAM) and a boundary-aware loss function to enhance synthesis fidelity and edge sharpness in tumor regions. Evaluated on BraTS2023-GLI and an in-house liver MRI dataset, the approach outperforms existing methods, demonstrates strong zero-shot generalization, and achieves inference times under 0.097 seconds per image, striking an effective balance among synthesis quality, computational efficiency, and tumor detail preservation.
📝 Abstract
Contrast-enhanced magnetic resonance imaging (CE-MRI) plays a crucial role in brain tumor assessment; however, its acquisition requires gadolinium-based contrast agents (GBCAs), which increase costs and raise safety concerns. Consequently, synthesizing CE-MRI from non-contrast MRI (NC-MRI) has emerged as a promising alternative. Early Generative Adversarial Network (GAN)-based approaches suffered from instability and mode collapse, while diffusion models, despite impressive synthesis quality, remain computationally expensive and often fail to faithfully reproduce critical tumor contrast patterns. To address these limitations, we propose Tumor-Biased Latent Bridge Matching (TuLaBM), which formulates NC-to-CE MRI translation as Brownian bridge transport between source and target distributions in a learned latent space, enabling efficient training and inference. To enhance tumor-region fidelity, we introduce a Tumor-Biased Attention Mechanism (TuBAM) that amplifies tumor-relevant latent features during bridge evolution, along with a boundary-aware loss that constrains tumor interfaces to improve margin sharpness. While bridge matching has been explored for medical image translation in pixel space, our latent formulation substantially reduces computational cost and inference time. Experiments on BraTS2023-GLI (BraSyn) and an in-house Cleveland Clinic liver MRI dataset show that TuLaBM consistently outperforms state-of-the-art baselines on both whole-image and tumor-region metrics, generalizes effectively to unseen liver MRI data in zero-shot and fine-tuned settings, and achieves inference times under 0.097 seconds per image.
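The abstract does not spell out the bridge parameterization, but the core idea of Brownian bridge matching between two latent codes can be sketched. The snippet below is a minimal illustration, assuming a unit time horizon and a noise scale `sigma` (both hypothetical choices, not taken from the paper): an intermediate latent `z_t` is drawn from the bridge marginal pinned at the NC-MRI latent `z0` (at t=0) and the CE-MRI latent `z1` (at t=1), which is the sample a matching network would be trained on.

```python
import numpy as np

def bridge_sample(z0, z1, t, sigma=1.0, rng=None):
    """Sample z_t from a Brownian bridge pinned at z0 (t=0) and z1 (t=1).

    Marginal distribution: z_t ~ N((1-t)*z0 + t*z1, sigma^2 * t*(1-t) * I),
    so the noise vanishes at both endpoints and peaks at t = 0.5.
    """
    rng = rng or np.random.default_rng()
    mean = (1.0 - t) * z0 + t * z1          # linear interpolation of the pinned endpoints
    std = sigma * np.sqrt(t * (1.0 - t))    # bridge variance, zero at t=0 and t=1
    return mean + std * rng.standard_normal(np.shape(z0))

# Hypothetical latent codes standing in for encoded NC-MRI and CE-MRI scans.
z_nc = np.zeros(8)
z_ce = np.ones(8)
z_mid = bridge_sample(z_nc, z_ce, t=0.5)
```

During training, a network would regress the target latent (or the bridge drift) from `(z_t, t)`; at inference, the learned drift transports an NC-MRI latent toward the CE-MRI distribution, which is what keeps sampling fast compared with pixel-space diffusion.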