🤖 AI Summary
Medical imaging algorithm development has long been hindered by the scarcity of large-scale real breast imaging data with pixel-level annotations, particularly paired 2D digital mammography (DM) and 3D digital breast tomosynthesis (DBT) images. To address this, the authors propose T-SYNTH, a framework that combines physics-based imaging simulation with anatomically realistic digital breast models to generate paired synthetic DM and DBT images together with automatically derived pixel-level segmentation masks. Because the images are simulated from a common anatomical model, the DM and DBT views are consistent with each other by construction. The result is a large-scale, open-source, multi-modal synthetic breast imaging dataset with comprehensive annotations. Initial experiments indicate that augmenting limited real patient data with T-SYNTH images improves lesion detection performance in both DM and DBT, supporting its practical utility as a data augmentation resource.
📝 Abstract
One of the key impediments to developing and assessing robust medical imaging algorithms is limited access to large-scale datasets with suitable annotations. Synthetic data generated under plausible physical and biological constraints may address some of these data limitations. We propose the use of physics simulations to generate synthetic images with pixel-level segmentation annotations, which are notoriously difficult to obtain for real images. Specifically, we apply this approach to breast imaging analysis and release T-SYNTH, a large-scale open-source dataset of paired 2D digital mammography (DM) and 3D digital breast tomosynthesis (DBT) images. Our initial experimental results indicate that T-SYNTH images show promise for augmenting limited real patient datasets for detection tasks in DM and DBT. Our data and code are publicly available at https://github.com/DIDSR/tsynth-release.