🤖 AI Summary
Addressing the challenges of semantic similarity measurement, limited noise robustness, and poor cross-modality generalization in multimodal medical image registration, this paper proposes IMPACT, a generic, plug-and-play semantic loss function. IMPACT integrates deep anatomical features from large pretrained segmentation models (e.g., TotalSegmentator and SAM) into registration pipelines through a differentiable comparison of images in feature space, with no task-specific fine-tuning required, and can be seamlessly embedded into mainstream frameworks such as Elastix and VoxelMorph. Evaluated on five challenging cross-modality registration tasks, including thoracic/abdominal CT/CBCT and pelvic MR/CT, IMPACT reduces target registration error by 23.6% and improves the Dice coefficient by 19.4% over baseline methods. The approach significantly enhances both registration accuracy and robustness to noise and modality mismatch, establishing a new state of the art in unsupervised, semantically guided registration.
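Neither the summary nor the abstract spells out how the feature comparison is computed, so the following is only a minimal sketch of the general idea, not IMPACT's actual implementation. It assumes a hypothetical frozen pretrained backbone `encoder` (standing in for, e.g., a TotalSegmentator or SAM feature extractor) and compares its feature maps for the fixed and warped moving images with a simple MSE:

```python
import torch
import torch.nn.functional as F


class FeatureSimilarityLoss(torch.nn.Module):
    """Semantic similarity loss sketch: compare frozen pretrained-encoder
    features of the fixed image and the warped moving image.
    `encoder` is a placeholder for any pretrained feature extractor
    (e.g., early layers of a segmentation backbone), not IMPACT's code."""

    def __init__(self, encoder: torch.nn.Module):
        super().__init__()
        self.encoder = encoder.eval()
        for p in self.encoder.parameters():
            p.requires_grad_(False)  # frozen: no task-specific fine-tuning

    def forward(self, fixed: torch.Tensor, warped_moving: torch.Tensor) -> torch.Tensor:
        f_fix = self.encoder(fixed)          # semantic feature maps
        f_mov = self.encoder(warped_moving)  # gradients flow back through the warp
        # Mean squared error in feature space; a cosine or correlation
        # distance over feature channels would be a natural alternative.
        return F.mse_loss(f_mov, f_fix)
```

Because the encoder is frozen, gradients propagate only through the warped moving image, i.e., into whatever produces the deformation, which is what makes the metric usable as a drop-in loss.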
📝 Abstract
Image registration is fundamental in medical imaging, enabling precise alignment of anatomical structures for diagnosis, treatment planning, image-guided treatment, and longitudinal monitoring. This work introduces IMPACT (Image Metric with Pretrained model-Agnostic Comparison for Transmodality registration), a generic semantic similarity metric designed for seamless integration into diverse image registration frameworks (such as Elastix and VoxelMorph). It compares deep learning-based features extracted from medical images without requiring task-specific training, ensuring broad applicability across modalities. By leveraging features from the large-scale pretrained TotalSegmentator models, and by accommodating the Segment Anything Model (SAM) and other large-scale segmentation networks, the approach provides a robust, scalable, and efficient solution for multimodal image registration. The IMPACT loss was evaluated on five challenging registration tasks involving thoracic CT/CBCT and pelvic MR/CT datasets. Quantitative metrics, such as Target Registration Error and Dice Similarity Coefficient, demonstrated significant improvements in anatomical alignment compared to baseline methods. Qualitative analyses further confirmed the robustness of the proposed metric to noise, artifacts, and modality variations. IMPACT's versatility and efficiency make it a valuable tool for advancing registration performance in clinical and research applications, addressing critical challenges in multimodal medical imaging.
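To illustrate the "seamless integration" claim, here is a hypothetical unsupervised training step in the style of a VoxelMorph pipeline that substitutes the feature loss sketched above for a conventional intensity loss. `reg_net` (a network predicting a displacement field), `warp` (a spatial transformer), `smoothness` (a deformation regularizer), and `loader` are assumed placeholder names, not part of IMPACT's or VoxelMorph's published API:

```python
# Hypothetical drop-in use inside a VoxelMorph-style training loop.
sem_loss = FeatureSimilarityLoss(encoder)  # frozen pretrained encoder from above
optimizer = torch.optim.Adam(reg_net.parameters(), lr=1e-4)

for fixed, moving in loader:
    flow = reg_net(moving, fixed)    # predicted displacement field
    warped = warp(moving, flow)      # resample the moving image
    loss = sem_loss(fixed, warped) + 0.01 * smoothness(flow)
    optimizer.zero_grad()
    loss.backward()                  # updates reg_net only; the encoder stays frozen
    optimizer.step()
```

The same loss could in principle drive a classical iterative optimizer (as in Elastix) instead of a network, since it only requires the warped image to be differentiable with respect to the transform parameters.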