🤖 AI Summary
This work addresses a central obstacle in medical image synthesis: the absence of publicly available, fully paired, pan-cancer multimodal datasets, which hinders reliable translation from non-contrast to contrast-enhanced imaging. To bridge this gap, we present the first public pan-cancer multimodal imaging dataset encompassing 11 anatomical organs, featuring strictly anatomically aligned dynamic contrast-enhanced MRI (DCE1–DCE3) and paired CT/CTC images. We further establish a comprehensive benchmark supporting one-to-one, many-to-one, and many-to-many translation tasks. This dataset fills a critical void in non-brain, multi-phase, multimodal medical image synthesis. Extensive evaluation of state-of-the-art image translation models demonstrates their effectiveness in synthesizing contrast-enhanced images across multiple organs, providing a foundational resource for safe and efficient clinical imaging workflows.
📝 Abstract
Contrast medium plays a pivotal role in radiological imaging: it amplifies lesion conspicuity and improves detection in the diagnosis of tumor-related diseases. However, depending on the patient's health condition or the medical resources available, the use of contrast medium is not always feasible. Recent work has explored AI-based image translation to synthesize contrast-enhanced images directly from non-contrast scans, aiming to reduce side effects and streamline clinical workflows. Progress in this direction has been constrained by data limitations: (1) existing public datasets focus almost exclusively on brain-related paired MR modalities; (2) other collections include partially paired data but suffer from missing modalities/timestamps and imperfect spatial alignment; (3) explicit labeling of CT vs. CTC or DCE phases is often absent; (4) substantial resources remain private. To bridge this gap, we introduce the first public, fully paired, pan-cancer medical imaging dataset spanning 11 human organs. The MR data include complete dynamic contrast-enhanced (DCE) sequences covering all three phases (DCE1–DCE3), while the CT data provide paired non-contrast (CT) and contrast-enhanced (CTC) acquisitions. The dataset is curated for anatomical correspondence, enabling rigorous evaluation of 1-to-1, N-to-1, and N-to-N translation settings (e.g., predicting DCE phases from non-contrast inputs). Built upon this resource, we establish a comprehensive benchmark and report results from representative contemporary image-to-image translation baselines. We release the dataset and benchmark to catalyze research on safe, effective contrast synthesis, with direct relevance to multi-organ oncology imaging workflows. Our code and dataset are publicly available at https://github.com/YifanChen02/PMPBench.
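As a rough illustration of how the fully paired structure maps onto the 1-to-1, N-to-1, and N-to-N settings, the following minimal PyTorch sketch builds a paired-volume loader. Everything here is an assumption for illustration only: the directory layout, the file names (`T1.nii.gz`, `DCE1.nii.gz`, etc.), and the `PairedContrastDataset` class are hypothetical and are not the released PMPBench loading API; consult the repository for the actual data organization.

```python
# Hypothetical sketch: loading fully paired volumes for contrast synthesis.
# Assumes a per-case layout like <root>/<organ>/<case_id>/<MODALITY>.nii.gz;
# the real PMPBench layout and file names may differ.
from pathlib import Path

import numpy as np
import nibabel as nib            # common reader for NIfTI medical volumes
import torch
from torch.utils.data import Dataset


class PairedContrastDataset(Dataset):
    """Yields (non-contrast inputs, contrast-enhanced target) volume pairs.

    `input_mods` / `target_mod` select the translation setting, e.g.:
      1-to-1:  input_mods=["CT"],        target_mod="CTC"
      N-to-1:  input_mods=["T1", "T2"],  target_mod="DCE1"   # example MR inputs
    (N-to-N is the same idea with a list of target phases.)
    """

    def __init__(self, root, input_mods, target_mod):
        self.input_mods, self.target_mod = input_mods, target_mod
        # Keep only cases where every required modality file exists, so each
        # sample is a fully paired, anatomically aligned set.
        self.cases = [
            c for c in sorted(Path(root).glob("*/*"))
            if c.is_dir() and all(
                (c / f"{m}.nii.gz").exists()
                for m in input_mods + [target_mod]
            )
        ]

    def _load(self, path):
        vol = nib.load(str(path)).get_fdata().astype(np.float32)
        # Min-max scale each volume to [0, 1] (one simple choice of
        # intensity normalization; not prescribed by the dataset).
        vol = (vol - vol.min()) / (vol.max() - vol.min() + 1e-8)
        return torch.from_numpy(vol)

    def __len__(self):
        return len(self.cases)

    def __getitem__(self, i):
        case = self.cases[i]
        x = torch.stack([self._load(case / f"{m}.nii.gz")
                         for m in self.input_mods])          # (N, D, H, W)
        y = self._load(case / f"{self.target_mod}.nii.gz").unsqueeze(0)
        return x, y
```

Under these assumptions, a 1-to-1 CT-to-CTC task would be `PairedContrastDataset(root, ["CT"], "CTC")`, and an N-to-1 task predicting the first DCE phase from non-contrast MR sequences would swap in the corresponding modality names; the model itself (any image-to-image translation baseline) consumes the `(x, y)` pairs unchanged.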