🤖 AI Summary
This work addresses the challenge of zero-shot multi-contrast brain MRI registration under domain shifts, such as those arising from high-field scanners, pathological brains, or unseen contrast types, when training uses only T1-weighted MRI data. To improve generalization without any target-domain training samples, the authors propose three efficient strategies: a MIND-based multimodal similarity loss, intensity randomization for appearance augmentation, and lightweight instance-specific optimization (ISO) of the feature encoder at inference time. Together, these techniques improve both the accuracy and the anatomical plausibility of the deformation fields across contrast domains. The method achieved first place on the LUMIR25 challenge test set and showed high registration accuracy with well-behaved deformations for T1-to-T2 registration on the validation set.
📝 Abstract
In this paper, we summarize the methods and results of our submission to the LUMIR25 challenge at Learn2Reg 2025, which achieved 1st place overall on the test set. Extending LUMIR24, this year's task focuses on zero-shot registration under domain shifts (high-field MRI, pathological brains, and various MRI contrasts), while the training data comprise only in-domain T1-weighted brain MRI. We begin with a meticulous analysis of the LUMIR24 winners to identify the main contributors to strong monomodal registration performance. To generalize across diverse contrasts with a model trained on T1-weighted MRI alone, we employ three simple but effective strategies: (i) a multimodal loss based on the modality-independent neighborhood descriptor (MIND), (ii) intensity randomization for appearance augmentation, and (iii) lightweight instance-specific optimization (ISO) of the feature encoders at inference time. On the validation set, our approach achieves reasonable T1-to-T2 registration accuracy while maintaining good deformation regularity.