🤖 AI Summary
To address the substantial modality gap between second-harmonic generation (SHG) and bright-field (BF) microscopy images, as well as the limited performance of existing learning-based registration methods, this paper proposes a fidelity-constrained displacement editing framework. The method integrates batch-wise contrastive learning to enhance cross-modal feature consistency, employs feature-pyramid-based pre-alignment for coarse initialization, and introduces a differentiable deformation-field editing module coupled with instance-level gradient optimization for multi-scale deformation modeling and local refinement. Its core innovation is the fidelity constraint mechanism, which enforces physical plausibility of the deformation field while preserving anatomical structure integrity. Evaluated on the Learn2Reg COMULISglobe SHG-BF challenge, the method achieves top-ranked performance on the official leaderboard.
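To make the two-stage idea concrete, here is a minimal, self-contained toy sketch of coarse pre-alignment followed by instance-level optimization under a fidelity penalty. It is not the paper's method: the paper learns dense deformation fields with a deep network, whereas this sketch only searches over rigid integer shifts, and all names (`register_pair`, `fidelity_weight`) are illustrative. The fidelity term here simply discourages implausibly large displacements, standing in for the paper's physical-plausibility constraint.

```python
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation, a simple similarity measure."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def register_pair(fixed, moving, search=5, fidelity_weight=1e-3):
    """Per-instance search for the shift (dy, dx) that maximizes
    similarity minus a fidelity penalty on displacement magnitude."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            warped = np.roll(moving, shift=(dy, dx), axis=(0, 1))
            score = ncc(warped, fixed) - fidelity_weight * (dy**2 + dx**2)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# Toy data: a Gaussian blob as the "fixed" image, and a shifted,
# noisy copy standing in for the other modality.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:32, 0:32]
fixed = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 20.0)
moving = np.roll(fixed, shift=(-2, -3), axis=(0, 1))  # ground-truth shift (2, 3)
moving = moving + 0.01 * rng.standard_normal(moving.shape)

print(register_pair(fixed, moving))  # recovers (2, 3)
```

In the actual framework, this per-pair search is replaced by gradient descent on a dense displacement field, but the structure is the same: a data term pulls the warped SHG image toward the BF image while the fidelity term keeps the deformation physically plausible.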
📝 Abstract
Co-examination of second-harmonic generation (SHG) and bright-field (BF) microscopy enables the differentiation of tissue components and collagen fibers, aiding the analysis of human breast and pancreatic cancer tissues. However, large discrepancies between SHG and BF images pose challenges for current learning-based registration models in aligning SHG to BF. In this paper, we propose a novel multi-modal registration framework that employs fidelity-imposed displacement editing to address these challenges. The framework integrates batch-wise contrastive learning, feature-based pre-alignment, and instance-level optimization. Experimental results from the Learn2Reg COMULISglobe SHG-BF Challenge validate the effectiveness of our method, which secured 1st place on the online leaderboard.