crossMoDA Challenge: Evolution of Cross-Modality Domain Adaptation Techniques for Vestibular Schwannoma and Cochlea Segmentation from 2021 to 2023

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses unsupervised cross-modality domain adaptation in medical image segmentation, specifically vestibular schwannoma (VS) and cochlea segmentation learned from contrast-enhanced T1 and transferred to T2-weighted MRI. The crossMoDA challenge series followed a progressive benchmark design—from single- to multi-institutional data, from whole-tumour annotation to Koos grading, and further to intra-/extra-meatal sub-segmentation—to systematically evaluate how data heterogeneity affects model robustness. Participating methods integrated multi-institutional T1/T2 MRI data within unsupervised domain adaptation frameworks, enabling generalization across scanning protocols and imaging devices. The winning approach of the 2023 edition reduced the number of outliers on the 2021 and 2022 test sets and delivered stable VS segmentation performance. Although cochlear Dice scores slightly decreased, likely due to the added complexity of tumour sub-annotations, the challenge series as a whole advances low-cost, clinically deployable VS management.

📝 Abstract
The cross-Modality Domain Adaptation (crossMoDA) challenge series, initiated in 2021 in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), focuses on unsupervised cross-modality segmentation, learning from contrast-enhanced T1 (ceT1) and transferring to T2 MRI. The task is an extreme example of domain shift chosen to serve as a meaningful and illustrative benchmark. From a clinical application perspective, it aims to automate Vestibular Schwannoma (VS) and cochlea segmentation on T2 scans for more cost-effective VS management. Over time, the challenge objectives have evolved to enhance its clinical relevance. The challenge evolved from using single-institutional data and basic segmentation in 2021 to incorporating multi-institutional data and Koos grading in 2022, and by 2023, it included heterogeneous routine data and sub-segmentation of intra- and extra-meatal tumour components. In this work, we report the findings of the 2022 and 2023 editions and perform a retrospective analysis of the challenge progression over the years. The observations from the successive challenge contributions indicate that the number of outliers decreases with an expanding dataset. This is notable since the diversity of scanning protocols of the datasets concurrently increased. The winning approach of the 2023 edition reduced the number of outliers on the 2021 and 2022 testing data, demonstrating how increased data heterogeneity can enhance segmentation performance even on homogeneous data. However, the cochlea Dice score declined in 2023, likely due to the added complexity from tumour sub-annotations affecting overall segmentation performance. While progress is still needed for clinically acceptable VS segmentation, the plateauing performance suggests that a more challenging cross-modal task may better serve future benchmarking.
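Segmentation quality across the challenge editions is reported with the Dice score. As a minimal illustrative sketch (not the challenge's own evaluation code), the Dice similarity coefficient between two binary masks can be computed as:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # 2 * |A ∩ B| / (|A| + |B|); eps guards against two empty masks
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Toy example: two 2x2 squares overlapping in 2 of 4 voxels each
a = np.zeros((4, 4), dtype=bool)
b = np.zeros((4, 4), dtype=bool)
a[1:3, 1:3] = True
b[1:3, 2:4] = True
print(dice_score(a, b))  # ≈ 0.5
```

In the challenge, scores like this are aggregated per structure (VS and cochlea) across test cases, which is why a drop in cochlea Dice can be reported independently of VS performance.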
Problem

Research questions and friction points this paper is trying to address.

Automate Vestibular Schwannoma and cochlea segmentation on T2 MRI scans
Address domain shift in unsupervised cross-modality medical image segmentation
Enhance clinical relevance with evolving datasets and segmentation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised cross-modality segmentation learning
Multi-institutional data integration
Heterogeneous routine data utilization
Navodini Wijethilake
School of BMEIS, King’s College London, London, United Kingdom
Reuben Dorent
Inria
Machine Learning, Deep Learning, Medical Image Analysis
Marina Ivory
School of BMEIS, King’s College London, London, United Kingdom
Aaron Kujawa
Research Associate, King's College London
Medical Imaging, Deep Learning
Stefan Cornelissen
Elisabeth-TweeSteden Hospital, Tilburg, Netherlands
Patrick Langenhuizen
Elisabeth-TweeSteden Hospital, Tilburg, Netherlands
Mohamed Okasha
Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Geert Grooteplein 10, 6525 GA, Nijmegen, The Netherlands
Anna Oviedova
King’s College Hospital, London, United Kingdom
Hexin Dong
Postdoctoral Associate at Weill Cornell Medicine
Bogyeong Kang
Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
Guillaume Sallé
UMR 1101 Inserm LaTIM, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
Luyi Han
Radboud University Medical Center, Netherlands Cancer Institute
Medical Image Analysis
Ziyuan Zhao
Harvard University
Han Liu
Vanderbilt University, USA
Tao Yang
Department of Automation, Shanghai Jiao Tong University, Shanghai, China
Shahad Hardan
PhD in Machine Learning
AI in Healthcare
Hussain Alasmawi
Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)
Machine Learning
Santosh Sanjeev
Technology Innovation Institute
Multimodality, Vision Language Models, AI for Healthcare, Generative AI
Yuzhou Zhuang
School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
Satoshi Kondo
Muroran Institute of Technology (formerly, Konica Minolta, Inc., Panasonic corp.)
Computer vision
Maria G. Baldeon Calisto
Universidad San Francisco de Quito, Diego de Robles s/n y Via Interoceánica, Quito, Ecuador
Shaikh M. Noman
Ludwig-Maximilians-Universität München, Germany
Cancan Chen
Infervision Advanced Research Institute, Beijing, China
Ipek Oguz
Vanderbilt University
Medical image computing, medical image analysis, segmentation, image registration, rodent imaging
Rongguo Zhang
Infervision Advanced Research Institute, Beijing, China
Mina Rezaei
University of South Florida, Tampa, FL, USA
Susana K. Lai-Yuen
University of South Florida, Tampa, FL, USA
Satoshi Kasai
Niigata University of Health and Welfare, Niigata, Japan
Chih-Cheng Hung
Center for Machine Vision and Security Research, Kennesaw State University, Marietta, GA 30060, USA
Mohammad Yaqub
Researcher in Biomedical Engineering, Associate professor at MBZUAI
Artificial Intelligence, Medical Image Analysis, Machine Learning, Deep Learning
Lisheng Wang
Department of Automation, Shanghai Jiao Tong University, Shanghai, China
Benoit M. Dawant
Vanderbilt University, USA
Cuntai Guan
President's Chair Professor, CCDS, Nanyang Technological University
Brain-Computer Interfaces, Machine Learning, Artificial Intelligence
Ritse Mann
Breast and Interventional Radiologist, Radboudumc
Radiology
Vincent Jaouen
Maître de conférences (Associate Prof.) - IMT Atlantique, Inserm LaTIM
Image processing, artificial intelligence, cancer imaging
Jiwan Han
UMR 1101 Inserm LaTIM, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
Li Zhang
Center for Data Science in Health and Medicine, Peking University, Beijing, China
Jonathan Shapey
King's College London
Tom Vercauteren
Professor of Interventional Image Computing, King's College London
Medical Image Computing, Image Registration, Computer-assisted Interventions, Endomicroscopy, Image-guided Interventions