🤖 AI Summary
This work addresses the significant barriers to research and clinical translation of medical imaging AI models (implementation heterogeneity, insufficient documentation, and poor reproducibility) by introducing the first standardized containerized platform tailored for medical imaging AI. The platform natively supports DICOM processing, provides a unified API interface, embeds structured metadata, and integrates an interactive visualization dashboard. It encapsulates a suite of advanced models for segmentation, prediction, and feature extraction, enabling plug-and-play deployment, community collaboration, and reproducible results through publicly available reference datasets and end-to-end evaluation pipelines. Comparative experiments on lung segmentation demonstrate that the platform substantially lowers deployment barriers and enhances model validation efficiency and clinical applicability.
📄 Abstract
Artificial intelligence (AI) has the potential to transform medical imaging by automating image analysis and accelerating clinical research. However, research and clinical use are limited by the wide variety of AI implementations and architectures, inconsistent documentation, and reproducibility issues. Here, we introduce MHub.ai, an open-source, container-based platform that standardizes access to AI models with minimal configuration, promoting accessibility and reproducibility in medical imaging. MHub.ai packages models from peer-reviewed publications into standardized containers that support direct processing of DICOM and other formats, provide a unified application interface, and embed structured metadata. Each model is accompanied by publicly available reference data that can be used to confirm model operation. MHub.ai includes an initial set of state-of-the-art segmentation, prediction, and feature extraction models for different modalities. The modular framework enables adaptation of any model and supports community contributions. We demonstrate the utility of the platform in a clinical use case through comparative evaluation of lung segmentation models. To further strengthen transparency and reproducibility, we publicly release the generated segmentations and evaluation metrics and provide interactive dashboards that allow readers to inspect individual cases and reproduce or extend our analysis. By simplifying model use, MHub.ai enables side-by-side benchmarking with identical execution commands and standardized outputs, and lowers the barrier to clinical translation.
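The "identical execution commands" and standardized containers described in the abstract suggest a Docker-style invocation pattern in which only the image name changes between models. The sketch below is an assumption about what such a unified interface might look like; the image names (`mhubai/lung_segmentation`, `mhubai/another_model`) and mount paths are hypothetical illustrations, not taken from the platform's documentation.

```shell
# Hypothetical sketch of a unified container interface: every model is
# invoked with the same command shape, so benchmarking two models only
# requires changing the image name. Paths and image names are assumed.

# Run a lung segmentation model on a directory of DICOM files:
docker run --rm --gpus all \
  -v "$PWD/dicom_in":/app/data/input_data \
  -v "$PWD/seg_out":/app/data/output_data \
  mhubai/lung_segmentation:latest

# Side-by-side comparison with a second model: inputs, output layout,
# and flags stay identical; only the image (and output folder) differ.
docker run --rm --gpus all \
  -v "$PWD/dicom_in":/app/data/input_data \
  -v "$PWD/seg_out_other":/app/data/output_data \
  mhubai/another_model:latest
```

Because inputs and outputs follow one convention, downstream evaluation (e.g. computing Dice scores between `seg_out` and `seg_out_other`) can be scripted once and reused across all models.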