🤖 AI Summary
Deep learning-based deformable registration methods suffer from limited generalizability, particularly on unseen MRI contrasts and cross-modal image pairs. To address this challenge in the Learn2Reg 2025 LUMIR brain registration benchmark, we propose a framework designed for cross-modal robustness. First, input images are mapped into the MIND (Modality Independent Neighborhood Descriptor) feature space to mitigate modality- and contrast-induced discrepancies. Second, we introduce a consistency-constrained multi-model ensemble strategy to enhance prediction stability and out-of-distribution generalization. Our approach integrates MIND feature extraction, a deep deformable registration network, and an improved ensemble mechanism. Evaluated on unseen contrasts, including MP2RAGE and QSM, as well as challenging cross-modal combinations, our method significantly outperforms baseline approaches. Results demonstrate superior generalizability and clinical applicability, establishing a new state-of-the-art for robust, modality-agnostic brain image registration.
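The summary does not pin down which MIND variant is used, so the following is only a minimal sketch of the descriptor idea: for each voxel, patch-wise distances to a small set of neighbours are normalized by a local variance estimate and mapped through an exponential, yielding a self-similarity representation that is largely insensitive to contrast. The function name `mind_features`, the six-neighbour configuration, and the box-filter patch distance are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def mind_features(img: torch.Tensor, radius: int = 2, dilation: int = 2,
                  eps: float = 1e-6) -> torch.Tensor:
    """Minimal MIND sketch for a 3D volume of shape (B, 1, D, H, W).

    For each of the six face neighbours at distance `dilation`, a
    patch-wise squared difference D(x, x + r) is computed with a box
    filter of half-width `radius`, normalized by the local mean of
    those distances V(x), and mapped through exp(-D / V). The result
    is a six-channel, roughly modality-independent representation.
    """
    offsets = [(dilation, 0, 0), (-dilation, 0, 0),
               (0, dilation, 0), (0, -dilation, 0),
               (0, 0, dilation), (0, 0, -dilation)]

    k = 2 * radius + 1
    dists = []
    for dz, dy, dx in offsets:
        # torch.roll wraps around at the borders; boundary handling is
        # ignored here for simplicity.
        shifted = torch.roll(img, shifts=(dz, dy, dx), dims=(2, 3, 4))
        diff2 = (img - shifted) ** 2
        # Box filter as a stand-in for the patch-wise SSD
        dists.append(F.avg_pool3d(diff2, kernel_size=k, stride=1,
                                  padding=radius))
    dist = torch.cat(dists, dim=1)                        # (B, 6, D, H, W)

    # Local variance estimate: mean patch distance over the neighbourhood
    variance = dist.mean(dim=1, keepdim=True).clamp_min(eps)
    mind = torch.exp(-dist / variance)
    # Normalize so the strongest response per voxel equals 1
    return mind / mind.max(dim=1, keepdim=True).values.clamp_min(eps)
```

Presumably both the moving and the fixed volume would be passed through such a transform before entering the registration network, so the network only ever sees the contrast-invariant multi-channel representation rather than raw intensities.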
📝 Abstract
Deep learning-based deformable registration methods have become popular in recent years. However, their ability to generalize beyond the training data distribution can be poor, significantly hindering their usability. The LUMIR brain registration challenge for Learn2Reg 2025 aims to advance the field by evaluating registration performance on contrasts and modalities different from those included in the training set. Here we describe our submission to the challenge, which proposes a very simple idea for significantly improving robustness: transforming the images into the MIND feature space before feeding them into the model. In addition, we propose an ensembling strategy that yields a small but consistent improvement.
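The abstract does not describe the ensembling mechanism; one plausible reading of the "consistency-constrained" strategy mentioned in the summary is to combine the displacement fields of several models while down-weighting, per voxel, the predictions that disagree with the consensus. The sketch below is a hypothetical instance of that idea (the function name, the softmin weighting, and the `temperature` parameter are all assumptions), not the authors' exact scheme.

```python
import torch

def consistency_weighted_ensemble(disp_fields: list[torch.Tensor],
                                  temperature: float = 1.0) -> torch.Tensor:
    """Hypothetical consistency-weighted ensemble of K displacement
    fields, each of shape (B, 3, D, H, W).

    Each model's prediction is compared voxel-wise against the ensemble
    mean; a softmin over the squared deviation turns agreement into a
    weight, so outlier fields contribute less to the final estimate.
    """
    stack = torch.stack(disp_fields, dim=0)            # (K, B, 3, D, H, W)
    mean = stack.mean(dim=0, keepdim=True)

    # Per-model, per-voxel squared deviation from the consensus field
    dev = (stack - mean).pow(2).sum(dim=2, keepdim=True)  # (K, B, 1, D, H, W)

    # Softmin over models: consistent predictions receive larger weights
    weights = torch.softmax(-dev / temperature, dim=0)
    return (weights * stack).sum(dim=0)                # (B, 3, D, H, W)
```

A plain average is recovered in the high-temperature limit, which matches the abstract's observation that the ensembling contributes a small but consistent gain on top of the MIND transform.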