On the Encapsulation of Medical Imaging AI Algorithms

📅 2025-04-30
🤖 AI Summary
Multi-center medical imaging AI collaboration faces persistent challenges: cross-institutional algorithm deployment is difficult, environmental dependencies are strong, data assumptions remain implicit, and incomplete documentation forces frequent manual intervention. To address these, this study systematically identifies comprehensive packaging requirements and reveals critical interoperability and reusability gaps in existing APIs and standards. We propose the first FAIR-aligned (particularly Interoperability and Reusability) packaging specification framework for medical imaging AI algorithms. Methodologically, we integrate domain ontologies (DICOM/SNOMED CT), Docker-based containerization, standardized interfaces (FHIR/MONAI Deploy), and FAIR data practices. Our contributions include an open-source packaging requirements checklist and evaluation matrix, which significantly reduce integration overhead in federated learning, multi-center validation, and clinical deployment. This work establishes foundational interoperability support for a sustainable medical AI algorithm ecosystem.
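To make the idea of a packaging requirements checklist concrete, the self-describing metadata it calls for could be captured as a machine-readable manifest. The sketch below is illustrative only: the field names (`container_image`, `inputs`, `outputs`) and the `AlgorithmManifest` class are hypothetical and not taken from the paper's specification.

```python
from dataclasses import dataclass, field

@dataclass
class IOSpec:
    """Declares one input or output of an encapsulated algorithm."""
    name: str
    modality: str   # e.g. a DICOM modality code such as "CT" or "SEG"
    format: str     # e.g. "DICOM" or "NIfTI"

@dataclass
class AlgorithmManifest:
    """Hypothetical self-describing manifest for a packaged algorithm."""
    name: str
    version: str
    container_image: str = ""   # Docker image pinning the execution environment
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

    def missing_fields(self):
        """Return checklist items that are still undocumented."""
        missing = []
        if not self.container_image:
            missing.append("container_image")
        if not self.inputs:
            missing.append("inputs")
        if not self.outputs:
            missing.append("outputs")
        return missing

# A complete manifest passes the checklist; an empty one does not.
manifest = AlgorithmManifest(
    name="liver-segmentation",
    version="1.0.0",
    container_image="example.org/liver-seg:1.0.0",
    inputs=[IOSpec("ct_series", "CT", "DICOM")],
    outputs=[IOSpec("mask", "SEG", "DICOM")],
)
print(manifest.missing_fields())  # → []
```

A validator like `missing_fields` is one way such a checklist could flag the implicit assumptions and incomplete documentation that otherwise require manual intervention at deployment time.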

📝 Abstract
In the context of collaborative AI research and development projects, it would be ideal to have self-contained encapsulated algorithms that can be easily shared between different parties, executed and validated on data at different sites, or trained in a federated manner. In practice, all of this is possible but greatly complicated, because human supervision and expert knowledge are needed to set up the execution of algorithms based on their documentation, possibly implicit assumptions, and knowledge about the execution environment and data involved. We derive and formulate a range of detailed requirements from the above goal and from specific use cases, focusing on medical imaging AI algorithms. Furthermore, we refer to a number of existing APIs and implementations and review which aspects each of them addresses, which problems are still open, and which public standards and ontologies may be relevant. Our contribution is a comprehensive collection of aspects that have not yet been addressed in their entirety by any single solution. Working towards the formulated goals should lead to more sustainable algorithm ecosystems and relates to the FAIR principles for research data, where this paper focuses on interoperability and (re)usability of medical imaging AI algorithms.
Problem

Research questions and friction points this paper is trying to address.

Encapsulating medical imaging AI algorithms for easy sharing
Addressing challenges in executing algorithms across different environments
Enhancing interoperability and reusability of AI algorithms in healthcare
Innovation

Methods, ideas, or system contributions that make the work stand out.

Encapsulated AI algorithms for easy sharing
Federated training across different sites
Comprehensive requirements for medical imaging AI
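The federated training mentioned above can be illustrated with a generic federated-averaging (FedAvg) step, where each site trains locally and only model parameters are merged. This is a standard textbook sketch, not the paper's implementation, and the site sizes below are invented for the example.

```python
def federated_average(site_models, site_sizes):
    """Weighted average of per-site model parameters (FedAvg).

    site_models: one parameter vector (list of floats) per site
    site_sizes:  number of local training samples at each site,
                 used as averaging weights
    """
    total = sum(site_sizes)
    n_params = len(site_models[0])
    return [
        sum(size * params[i] for params, size in zip(site_models, site_sizes)) / total
        for i in range(n_params)
    ]

# Two sites with different amounts of local data: the site with
# 300 samples pulls the merged parameters towards its values.
merged = federated_average([[1.0, 2.0], [3.0, 4.0]], [100, 300])
print(merged)  # → [2.5, 3.5]
```

In this setting, the encapsulation requirements matter doubly: each site must be able to run the packaged training step unattended, since raw data never leaves the institution and only parameter updates are exchanged.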
Hans Meine
Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
Yongli Mou
Chair of Computer Science 5 - Information Systems and Databases, RWTH Aachen University
artificial intelligence, deep learning, large language models, computer vision, multimodal models
Guido Prause
Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
Horst K. Hahn
Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany