🤖 AI Summary
Existing brain MRI registration methods struggle to effectively model long-range anatomical correspondences while ensuring the physical plausibility of deformation fields. To address this challenge, this work proposes a Transformer-based registration framework incorporating cycle-consistent inverse constraints and a Swin-UNet backbone architecture. The approach jointly optimizes forward and backward deformation fields, enabling simultaneous capture of local anatomical details and global spatial dependencies. Bidirectional consistency constraints are introduced to enforce invertibility and anatomical plausibility of the resulting deformations. Evaluated on a large-scale multicenter dataset comprising 2,851 T1-weighted brain MRIs, the proposed method significantly outperforms state-of-the-art approaches in terms of registration accuracy, stability, and anatomical consistency.
📝 Abstract
Deformable image registration plays a fundamental role in medical image analysis by enabling spatial alignment of anatomical structures across subjects. While recent deep learning-based approaches have significantly improved computational efficiency, many existing methods remain limited in capturing long-range anatomical correspondence and maintaining deformation consistency. In this work, we present a cycle inverse-consistent transformer-based framework for deformable brain MRI registration. The model integrates a Swin-UNet architecture with bidirectional consistency constraints, enabling the joint estimation of forward and backward deformation fields. This design allows the framework to capture both local anatomical details and global spatial relationships while improving deformation stability. We conduct a comprehensive evaluation of the proposed framework on a large multi-center dataset consisting of 2851 T1-weighted brain MRI scans aggregated from 13 public datasets. Experimental results demonstrate that the proposed framework achieves strong and balanced performance across multiple quantitative evaluation metrics while maintaining stable and physically plausible deformation fields. Detailed quantitative comparisons with baseline methods, including ANTs, ICNet, and VoxelMorph, are provided in the appendix. Experimental results demonstrate that CICTM achieves consistently strong performance across multiple evaluation criteria while maintaining stable and physically plausible deformation fields. These properties make the proposed framework suitable for large-scale neuroimaging datasets where both accuracy and deformation stability are critical.