🤖 AI Summary
Medical image analysis is hindered by data fragmentation, format heterogeneity, and inconsistent annotations, leading to high preprocessing costs and poor model reproducibility. To address these challenges, we introduce MedIMeta—the first standardized, off-the-shelf, multi-domain, multi-task medical imaging metadata set. MedIMeta unifies 19 publicly available datasets across 10 clinical imaging domains and 54 diagnostic tasks. It establishes a novel cross-domain, multi-task metadata paradigm, standardizing data formats, annotation protocols, and evaluation interfaces end-to-end. We further develop a robust data engineering pipeline featuring cross-modal normalization, task-semantic alignment encoding, and native PyTorch encapsulation, alongside benchmarks for both fully supervised and cross-domain few-shot learning. Experiments demonstrate that MedIMeta substantially lowers development barriers: model reproduction efficiency improves by over 3×, while performance remains stable and reproducible across diverse settings.
📝 Abstract
While the field of medical image analysis has undergone a transformative shift with the integration of machine learning techniques, the main challenge of these techniques is often the scarcity of large, diverse, and well-annotated datasets. Medical images vary in format, size, and other parameters and therefore require extensive preprocessing and standardization, for usage in machine learning. Addressing these challenges, we introduce the Medical Imaging Meta-Dataset (MedIMeta), a novel multi-domain, multi-task meta-dataset. MedIMeta contains 19 medical imaging datasets spanning 10 different domains and encompassing 54 distinct medical tasks, all of which are standardized to the same format and readily usable in PyTorch or other ML frameworks. We perform a technical validation of MedIMeta, demonstrating its utility through fully supervised and cross-domain few-shot learning baselines.