🤖 AI Summary
To address inaccurate cross-modal correspondences in image-to-point cloud (I2P) registration caused by inherent modality discrepancies, this paper proposes an end-to-end differentiable registration framework. The method introduces three key innovations: (1) Control-Side Score Distillation, which transfers geometric priors from depth-conditioned diffusion models into the registration network; (2) a Deformable Correspondence Tuning module that enables fine-grained, differentiable feature matching; and (3) a differentiable PnP solver that permits full-chain gradient backpropagation and joint optimization. Evaluated on the 7-Scenes benchmark, the approach improves registration recall by over 7% compared with state-of-the-art methods, significantly enhancing both the robustness and accuracy of cross-modal registration. This work establishes a novel paradigm for I2P registration by unifying generative modeling, correspondence learning, and geometric optimization within a fully differentiable pipeline.
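The first innovation follows the score-distillation family of techniques. The paper's Control-Side Score Distillation operates through the control (depth) branch of a depth-conditioned diffusion model; the generic mechanism it builds on can be sketched as below. This is a toy illustration only: the `denoiser` here is a frozen stand-in convolution, not a real pretrained diffusion model, and the single-step noising omits the diffusion timestep schedule. The key trick, shared with standard SDS, is injecting the weighted noise residual as a detached gradient so the expensive denoiser Jacobian is never backpropagated.

```python
import torch

def score_distillation_loss(render, denoiser, t_weight=1.0):
    """Score-distillation-style surrogate loss (generic SDS sketch).

    `render` is a differentiable rendering that depends on the predicted
    transformation (e.g. a depth map of the transformed point cloud);
    `denoiser` stands in for a pretrained noise predictor. The gradient
    pushed onto `render` is w(t) * (eps_pred - eps), injected via a
    detached residual so the denoiser itself is never differentiated.
    """
    noise = torch.randn_like(render)
    noisy = render + noise                          # toy one-step noising
    eps_pred = denoiser(noisy)                      # predicted noise
    residual = (t_weight * (eps_pred - noise)).detach()
    # Surrogate loss: d(loss)/d(render) == residual, by construction.
    return (residual * render).sum()

# Toy usage with a frozen stand-in denoiser.
denoiser = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
for p in denoiser.parameters():
    p.requires_grad_(False)

depth = torch.rand(1, 1, 16, 16, requires_grad=True)  # "rendered" depth
loss = score_distillation_loss(depth, denoiser)
loss.backward()
assert depth.grad is not None                       # prior guides the render
```

For this gradient to reach the network's cross-modal features, every step between features and the rendered depth must itself be differentiable, which is what the remaining two components provide.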
📝 Abstract
Learning cross-modal correspondences is essential for image-to-point cloud (I2P) registration. Existing methods achieve this mostly by utilizing metric learning to enforce feature alignment across modalities, disregarding the inherent modality gap between image and point data. Consequently, this paradigm struggles to ensure accurate cross-modal correspondences. To this end, inspired by the cross-modal generation success of recent large diffusion models, we propose Diff$^2$I2P, a fully Differentiable I2P registration framework, leveraging a novel and effective Diffusion prior to bridge the modality gap. Specifically, we propose a Control-Side Score Distillation (CSD) technique to distill knowledge from a depth-conditioned diffusion model and directly optimize the predicted transformation. However, the gradients on the transformation fail to backpropagate onto the cross-modal features due to the non-differentiability of correspondence retrieval and the PnP solver. We therefore further propose a Deformable Correspondence Tuning (DCT) module to estimate the correspondences in a differentiable way, followed by transformation estimation using a differentiable PnP solver. With these two designs, the diffusion model serves as a strong prior that guides the cross-modal feature learning of image and point cloud toward robust correspondences, which significantly improves registration. Extensive experimental results demonstrate that Diff$^2$I2P consistently outperforms SoTA I2P registration methods, achieving over 7% improvement in registration recall on the 7-Scenes benchmark.
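The non-differentiability the abstract points to comes from hard argmax matching: picking each pixel's single best point breaks the gradient path. A common way to restore it, sketched below, is soft correspondence retrieval, where each pixel's 3D match is a similarity-weighted average of point coordinates. This is a minimal illustration of the general idea, not the paper's DCT module (which adds deformable tuning on top); all tensor names and the temperature value are illustrative assumptions. A differentiable PnP solver would then consume these soft 2D-3D matches.

```python
import torch
import torch.nn.functional as F

def soft_correspondences(img_feats, pc_feats, pc_xyz, tau=0.07):
    """Differentiable correspondence retrieval (soft-argmax sketch).

    img_feats: (N, C) features at N sampled pixels
    pc_feats:  (M, C) features at M points
    pc_xyz:    (M, 3) point coordinates
    Returns (N, 3): each pixel's 3D match is a similarity-weighted mean
    of point coordinates, so gradients flow into both feature towers
    (a hard argmax lookup would block them).
    """
    img_feats = F.normalize(img_feats, dim=-1)
    pc_feats = F.normalize(pc_feats, dim=-1)
    sim = img_feats @ pc_feats.t() / tau     # (N, M) cosine logits
    weights = sim.softmax(dim=-1)            # rows sum to 1
    return weights @ pc_xyz                  # (N, 3) soft 3D matches

# Toy usage: a loss on the matched coordinates reaches both towers.
N, M, C = 32, 128, 64
img_feats = torch.randn(N, C, requires_grad=True)
pc_feats = torch.randn(M, C, requires_grad=True)
pc_xyz = torch.randn(M, 3)

matches = soft_correspondences(img_feats, pc_feats, pc_xyz)
matches.sum().backward()
assert matches.shape == (N, 3)
assert img_feats.grad is not None and pc_feats.grad is not None
```

With the matching step differentiable end to end, a loss defined on the estimated transformation (such as the distillation loss above the PnP stage) can supervise the cross-modal features directly.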