Mixture of Multicenter Experts in Multimodal Generative AI for Advanced Radiotherapy Target Delineation

📅 2024-09-27
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Poor generalizability and regional bias of medical AI in radiotherapy target delineation stem from inter-institutional heterogeneity in clinical practice. Method: We propose the first few-shot adaptive generative framework that requires no cross-institutional data sharing. It introduces a novel Mixture of Multicenter Experts (MoME) architecture that integrates multimodal generative AI, Mixture of Experts (MoE), and cross-site federated few-shot adaptation, enabling collaborative modeling of heterogeneous clinical knowledge using only small amounts of local imaging data and structured textual annotations per site. Results: Our method significantly outperforms baselines in prostate cancer target delineation, achieves the largest gains under severe distribution shift or extreme data scarcity, and enables rapid deployment of robust, personalized AI-assisted systems in resource-constrained healthcare settings.

📝 Abstract
Clinical experts employ diverse philosophies and strategies in patient care, influenced by regional patient populations. However, existing medical artificial intelligence (AI) models are often trained on data distributions that disproportionately reflect highly prevalent patterns, reinforcing biases and overlooking the diverse expertise of clinicians. To overcome this limitation, we introduce the Mixture of Multicenter Experts (MoME) approach. This method strategically integrates specialized expertise from diverse clinical strategies, enhancing the AI model's ability to generalize and adapt across multiple medical centers. The MoME-based multimodal target volume delineation model, trained with few-shot samples including images and clinical notes from each medical center, outperformed baseline methods in prostate cancer radiotherapy target delineation. The advantages of MoME were most pronounced when data characteristics varied across centers or when data availability was limited, demonstrating its potential for broader clinical applications. Therefore, the MoME framework enables the deployment of AI-based target volume delineation models in resource-constrained medical facilities by adapting to specific preferences of each medical center only using a few sample data, without the need for data sharing between institutions. Expanding the number of multicenter experts within the MoME framework will significantly enhance the generalizability, while also improving the usability and adaptability of clinical AI applications in the field of precision radiation oncology.
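The paper itself does not include implementation code here, but the core MoME idea described in the abstract, a gating network that blends the outputs of center-specific expert heads over shared multimodal features, can be illustrated with a minimal sketch. All names, shapes, and the NumPy setup below are illustrative assumptions, not the authors' implementation, which additionally handles multimodal image/text inputs and federated few-shot adaptation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over a 1-D vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

class CenterExpert:
    """Stand-in for one medical center's delineation head (hypothetical)."""
    def __init__(self, dim):
        self.w = rng.standard_normal((dim, dim)) * 0.1

    def __call__(self, feat):
        # Per-center transformation of the shared feature vector.
        return feat @ self.w

class MoMEGate:
    """Toy mixture of multicenter experts: a learned gate weights expert outputs."""
    def __init__(self, dim, n_centers):
        self.experts = [CenterExpert(dim) for _ in range(n_centers)]
        self.gate_w = rng.standard_normal((dim, n_centers)) * 0.1

    def __call__(self, feat):
        weights = softmax(feat @ self.gate_w)             # per-center mixing weights, sum to 1
        outs = np.stack([e(feat) for e in self.experts])  # shape (n_centers, dim)
        return weights @ outs                             # gated combination, shape (dim,)

model = MoMEGate(dim=8, n_centers=3)
feat = rng.standard_normal(8)  # toy shared multimodal feature vector
out = model(feat)
print(out.shape)  # (8,)
```

In the few-shot adaptation setting the abstract describes, only a small number of local samples per site would be needed to tune the gate (and optionally the local expert), which is what allows deployment without cross-institutional data sharing.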
Problem

Research questions and friction points this paper is trying to address.

Addressing AI bias in medical models without data sharing
Integrating diverse clinical expertise to enhance model generalizability
Enabling localized customization for radiotherapy target delineation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of Multicenter Experts framework
Few-shot multimodal training approach
Cross-center generalization without data sharing
👥 Authors
Yujin Oh
Harvard Medical School & Massachusetts General Hospital
Medical Image Analysis · Artificial Intelligence · Large Language Model · Multimodal AI

Sangjoon Park
Department of Radiation Oncology, Yonsei University College of Medicine
Deep Learning · Medical Imaging · Radiation Oncology

Xiang Li
Center for Advanced Medical Computing and Analysis, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA

Wang Yi
Professor of Embedded Systems, Uppsala University
Computer Science · Real-Time Systems · Embedded Systems · Formal Methods · Verification

Jonathan Paly
Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, USA

Jason Efstathiou
Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, USA

Annie Chan
Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, USA

Jun Won Kim
Department of Radiation Oncology, Gangnam Severance Hospital, Seoul, South Korea

H. Byun
Department of Radiation Oncology, Yongin Severance Hospital, Yongin, Gyeonggi-do, Korea

Ik-jae Lee
Department of Radiation Oncology, Yonsei University College of Medicine, Seoul, South Korea

Jaeho Cho
Department of Radiation Oncology, Yonsei University College of Medicine, Seoul, South Korea

C. W. Wee
Department of Radiation Oncology, Yonsei University College of Medicine, Seoul, South Korea

Peng Shu
School of Computing, University of Georgia, GA, USA

Peilong Wang
City of Hope
Physics · AI · Imaging

Nathan Yu
Department of Radiation Oncology, Mayo Clinic, AZ, USA

J. Holmes
Department of Radiation Oncology, Mayo Clinic, AZ, USA

Jong Chul Ye
Professor, Chung Moon Soul Chair, Graduate School of AI, KAIST
Machine Learning · Computational Imaging · Medical Imaging · Signal Processing · Compressed Sensing

Quanzheng Li
Massachusetts General Hospital, Harvard Medical School
Image Reconstruction · Medical Image Analysis · Deep Learning in Medicine · Multimodality Medical Data Analysis

Wei Liu
Department of Radiation Oncology, Mayo Clinic, AZ, USA

W. Koom
Department of Radiation Oncology, Yonsei University College of Medicine, Seoul, South Korea

Jin Sung Kim
Department of Radiation Oncology, Yonsei University College of Medicine, Seoul, South Korea

Kyungsang Kim
Assistant Professor at Harvard Medical School and Mass General Hospital
Deep Learning · Logical AI · Compressed Sensing · Medical Imaging · Optimization