🤖 AI Summary
This work addresses the challenge of establishing semantically consistent spatial correspondences across images in medical image registration. We propose a novel, training-free, annotation-free paradigm that diverges from conventional approaches, such as predicting displacement fields or transformation parameters, by leveraging pre-trained multimodal models (e.g., GroundingDINO and SAM). Given identical textual prompts (e.g., "prostate"), the method localizes and segments semantically aligned anatomical regions across different images, thereby implicitly constructing spatial correspondences. This is the first study to uncover an intrinsic link between natural language prompting and cross-image spatial consistency. In inter-subject prostate MRI registration, our approach surpasses state-of-the-art unsupervised methods and matches the performance of weakly supervised alternatives. Qualitative analysis demonstrates strong spatial invariance of prompt-driven segmentation and reveals distinct linguistic representations for local versus global correspondences.
📝 Abstract
Spatial correspondence can be represented by pairs of segmented regions, such that image registration networks aim to segment corresponding regions rather than predict displacement fields or transformation parameters. In this work, we show that such a corresponding region pair can be predicted by applying the same language prompt to two different images using pre-trained large multimodal models, here GroundingDINO and SAM. This enables a fully automated, training-free registration algorithm that is potentially generalisable to a wide range of image registration tasks. In this paper, we present experimental results on one of the more challenging tasks, registering inter-subject prostate MR images, which involves highly variable intensity and morphology between patients. Tell2Reg is training-free, eliminating the need for the costly and time-consuming data curation and labelling previously required for this registration task. The approach outperforms the unsupervised learning-based registration methods tested, and performs comparably to weakly-supervised methods. Additional qualitative results suggest, for the first time, a potential correlation between language semantics and spatial correspondence, including the spatial invariance of language-prompted regions and the difference in language prompts between the obtained local and global correspondences. Code is available at https://github.com/yanwenCi/Tell2Reg.git.
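The core idea above, that one prompt-matched region pair implicitly defines a spatial correspondence, can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it assumes the prompt-driven masks (which Tell2Reg would obtain from GroundingDINO and SAM for the same text prompt on each image) are already available, and derives a toy centroid-plus-scale mapping between the two images from that single region pair. All function names here are hypothetical.

```python
import numpy as np

def mask_centroid(mask):
    """Centroid (row, col) of a binary mask."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def correspondence_from_masks(mask_moving, mask_fixed):
    """Turn one prompt-matched region pair into a simple spatial
    correspondence: a translation plus isotropic scale mapping
    points from the moving image onto the fixed image.

    Any point p in the moving image is mapped to
        c_fixed + scale * (p - c_moving),
    where the scale is estimated from the ratio of region areas.
    """
    c_mov = mask_centroid(mask_moving)
    c_fix = mask_centroid(mask_fixed)
    scale = np.sqrt(mask_fixed.sum() / mask_moving.sum())

    def warp(p):
        return c_fix + scale * (np.asarray(p, dtype=float) - c_mov)

    return warp

# Toy example: the same prompt segments a 10x10 region in the
# moving image and a 20x20 region in the fixed image.
mask_mov = np.zeros((64, 64), dtype=bool)
mask_mov[10:20, 10:20] = True
mask_fix = np.zeros((64, 64), dtype=bool)
mask_fix[30:50, 30:50] = True

warp = correspondence_from_masks(mask_mov, mask_fix)
print(warp([14.5, 14.5]))  # centroid maps to centroid: [39.5 39.5]
```

In the full method, many such region pairs (one per prompt) would jointly constrain a dense transformation rather than the single rigid-style mapping shown here.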