Modality-Agnostic Input Channels Enable Segmentation of Brain Lesions in Multimodal MRI with Sequences Unavailable During Training

📅 2025-09-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal brain MRI segmentation models rely on fixed modality combinations, limiting generalization to unseen modalities and often discarding modality-specific discriminative information. To address this, we propose a modality-agnostic U-Net architecture featuring parallel modality-agnostic and modality-specific input pathways, coupled with modality-aware data augmentation that synthesizes realistic virtual MRI contrast images. This design enables disentangled yet shared cross-modal feature learning. Our method is the first to achieve effective generalization to previously unseen modality combinations without compromising performance on seen modalities. We validate it across eight diverse datasets, five lesion types, and eight MRI modalities, demonstrating substantial improvements in robustness and adaptability under multicenter, multi-protocol clinical settings. The source code is publicly available.

📝 Abstract
Segmentation models are important tools for the detection and analysis of lesions in brain MRI. Depending on the type of brain pathology that is imaged, MRI scanners can acquire multiple, different image modalities (contrasts). Most segmentation models for multimodal brain MRI are restricted to fixed modalities and cannot effectively process new ones at inference. Some models generalize to unseen modalities but may lose discriminative modality-specific information. This work aims to develop a model that can perform inference on data containing image modalities unseen during training, previously seen modalities, and heterogeneous combinations of both, thus allowing a user to utilize any available imaging modalities. We demonstrate this is possible with a simple, and thus practical, alteration to the U-Net architecture: integrating a modality-agnostic input channel or pathway alongside modality-specific input channels. To train this modality-agnostic component, we develop an image augmentation scheme that synthesizes artificial MRI modalities. Augmentations differentially alter the appearance of pathological and healthy brain tissue to create artificial contrasts between them while maintaining realistic anatomical integrity. We evaluate the method using 8 MRI databases that include 5 types of pathologies (stroke, tumours, traumatic brain injury, multiple sclerosis and white matter hyperintensities) and 8 modalities (T1, T1+contrast, T2, PD, SWI, DWI, ADC and FLAIR). The results demonstrate that the approach preserves the ability to effectively process MRI modalities encountered during training, while being able to process new, unseen modalities to improve its segmentation. Project code: https://github.com/Anthony-P-Addison/AGN-MOD-SEG
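The augmentation idea described above can be sketched in a few lines: apply different random intensity remappings to lesion and healthy tissue to create a virtual contrast, while the lesion geometry (and hence the anatomy) is untouched. This is a minimal illustrative sketch, not the paper's actual implementation; the function name, gamma remapping, and parameter ranges are assumptions chosen for clarity.

```python
import numpy as np

def synthesize_virtual_contrast(image, lesion_mask, rng=None):
    """Illustrative sketch of modality-aware augmentation: remap
    intensities differently inside vs. outside the lesion mask to
    create an artificial contrast (the paper's scheme may differ)."""
    rng = np.random.default_rng(rng)
    # Independent gamma-like remappings per tissue class create a new
    # lesion/healthy contrast while preserving the mask geometry.
    gamma_healthy = rng.uniform(0.5, 2.0)
    gamma_lesion = rng.uniform(0.5, 2.0)
    img = (image - image.min()) / (np.ptp(image) + 1e-8)  # normalize to [0, 1]
    out = np.where(lesion_mask, img ** gamma_lesion, img ** gamma_healthy)
    # Occasionally invert the contrast, loosely mimicking e.g. T1 vs T2.
    if rng.random() < 0.5:
        out = 1.0 - out
    return out
```

Because only intensities change, the paired segmentation label remains valid for the synthesized image, so the same mask can supervise training on the virtual modality.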
Problem

Research questions and friction points this paper is trying to address.

Segments brain lesions using unseen MRI modalities
Handles heterogeneous combinations of training and new modalities
Preserves performance on known modalities while processing new ones
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modality-agnostic input channels for segmentation
Synthetic MRI augmentation for training
Handles unseen modalities during inference
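The parallel-pathway idea behind these contributions can be illustrated with a toy sketch: every input image passes through a shared modality-agnostic transform, and additionally through a modality-specific one when that modality was seen in training. The function, weights, and mean fusion below are assumptions for illustration only, not the paper's architecture.

```python
import numpy as np

def dual_pathway_features(modalities, specific_weights, agnostic_weight):
    """Toy illustration of parallel input pathways: a shared
    (modality-agnostic) weight handles any modality, including unseen
    ones, while known modalities also get a dedicated weight."""
    feats = []
    for name, img in modalities.items():
        agnostic = agnostic_weight * img                   # shared pathway
        specific = specific_weights.get(name, 0.0) * img   # specific pathway, if known
        feats.append(agnostic + specific)
    # Mean fusion keeps the feature scale independent of how many
    # modalities happen to be available at inference time.
    return np.mean(feats, axis=0)
```

An unseen modality (e.g. SWI when only T1 was trained on) still contributes through the agnostic pathway, which is the mechanism that lets heterogeneous modality combinations be processed at inference.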
Anthony P. Addison
Department of Engineering Science, University of Oxford, Oxford, UK
Felix Wagner
Department of Engineering Science, University of Oxford, Oxford, UK
Wentian Xu
Department of Engineering Science, University of Oxford, Oxford, UK
Natalie Voets
University of Oxford
MRI · Neuroimaging · Neurosurgery · Neuroscience · Neurooncology
Konstantinos Kamnitsas
Associate Professor of Biomedical Imaging @ University of Oxford
Machine Learning · Biomedical Image Analysis · Computer Vision