🤖 AI Summary
This paper addresses the low accuracy and poor robustness of cross-modal registration between 2D camera images and 3D LiDAR point clouds in autonomous driving and robotics. Methodologically, it proposes an end-to-end registration framework grounded in raw edge structures: (1) it introduces a novel cross-modal correspondence modeling paradigm that uses edge points and edge pixels as atomic units, avoiding the precision loss incurred by downsampling the original data; (2) it designs an attention-driven feature exchange module to explicitly mitigate modality heterogeneity; and (3) it incorporates an optimal matching layer to strengthen outlier rejection and correspondence robustness. Evaluated on the KITTI and nuScenes benchmarks, the method achieves state-of-the-art performance, outperforming existing approaches that rely on handcrafted feature matching or generic feature learning, while maintaining high precision, strong robustness, and real-time efficiency.
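To make the attention-driven feature exchange concrete, here is a minimal PyTorch sketch of what such a block could look like: each modality's edge features query the other's via cross-attention, pulling both streams toward a shared embedding space. The paper does not publish this code; the class, parameter, and dimension names below are illustrative assumptions.

```python
# Hypothetical feature exchange block (not the authors' implementation):
# edge-pixel features and edge-point features attend to each other so that
# subsequent matching compares features from a shared embedding space.
import torch
import torch.nn as nn


class FeatureExchangeBlock(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        # One cross-attention module per direction: points -> image, image -> points.
        self.pts_to_img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img_to_pts = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_img = nn.LayerNorm(dim)
        self.norm_pts = nn.LayerNorm(dim)

    def forward(self, f_img: torch.Tensor, f_pts: torch.Tensor):
        # f_img: (B, N_pix, dim) edge-pixel features; f_pts: (B, N_pts, dim) edge-point features.
        # Each modality queries the other; residual + LayerNorm keeps training stable.
        img_upd, _ = self.pts_to_img(query=f_img, key=f_pts, value=f_pts)
        pts_upd, _ = self.img_to_pts(query=f_pts, key=f_img, value=f_img)
        return self.norm_img(f_img + img_upd), self.norm_pts(f_pts + pts_upd)


# Usage: exchange features for 500 edge pixels and 400 edge points.
block = FeatureExchangeBlock()
f_img, f_pts = block(torch.randn(1, 500, 128), torch.randn(1, 400, 128))
```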
📝 Abstract
Cross-modal data registration has long been a critical task in computer vision, with extensive applications in autonomous driving and robotics. Accurate and robust registration methods are essential for aligning data from different modalities; they form the foundation of multimodal sensor data fusion and improve the accuracy and reliability of perception systems. The registration task between 2D images captured by cameras and 3D point clouds captured by Light Detection and Ranging (LiDAR) sensors is usually treated as a visual pose estimation problem: similarities between high-dimensional features from the two modalities are used to identify pixel-point correspondences, and the pose is then estimated with least-squares techniques. However, existing approaches often downsample the original point cloud and image data to meet computational constraints, which inevitably costs precision. In addition, features extracted from different modalities by separate feature extractors require dedicated techniques to bridge the cross-modal gap before they can be matched effectively. To address these challenges, we propose a method that uses edge information from the original point clouds and images for cross-modal registration. By extracting edge points and edge pixels, we retain crucial information from the original data, enhancing registration accuracy while maintaining computational efficiency. Operating on edge points and edge pixels also allows us to introduce an attention-based feature exchange block that eliminates cross-modal disparities, and we further incorporate an optimal matching layer to improve correspondence identification. We validate the accuracy of our method on the KITTI and nuScenes datasets, demonstrating its state-of-the-art performance.
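For illustration, below is a minimal sketch of an optimal matching layer in the SuperGlue style; this is an assumption about the design, not the paper's released code. A pairwise score matrix between edge-pixel and edge-point descriptors is augmented with a learnable "dustbin" for unmatched features and normalized with Sinkhorn iterations in log space (uniform marginals here are a simplification of the full formulation).

```python
# Hypothetical optimal matching layer (SuperGlue-style), not the paper's code.
import torch


def sinkhorn_matching(scores: torch.Tensor, dustbin: torch.Tensor,
                      iters: int = 20) -> torch.Tensor:
    """scores: (B, M, N) similarities between M edge pixels and N edge points;
    dustbin: learnable scalar score assigned to the unmatched bin."""
    B, M, N = scores.shape
    # Augment with a dustbin row and column so outliers have somewhere to go.
    bin_row = dustbin.expand(B, 1, N)
    bin_col = dustbin.expand(B, M + 1, 1)
    log_p = torch.cat([torch.cat([scores, bin_row], dim=1), bin_col], dim=2)
    # Alternate row/column normalization in log space for numerical stability.
    for _ in range(iters):
        log_p = log_p - torch.logsumexp(log_p, dim=2, keepdim=True)
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)
    return log_p.exp()  # (B, M+1, N+1) soft assignment incl. dustbins
```

Correspondences could then be read off by mutual argmax over the assignment matrix (discarding dustbin hits) and passed to a standard PnP solver, e.g. OpenCV's cv2.solvePnPRansac, for the least-squares pose estimate.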