EdgeRegNet: Edge Feature-based Multimodal Registration Network between Images and LiDAR Point Clouds

📅 2025-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the low accuracy and poor robustness of cross-modal registration between 2D images and 3D LiDAR point clouds in autonomous driving and robotics. Methodologically, it proposes an end-to-end registration framework grounded in raw edge structures: (1) it introduces a novel cross-modal correspondence modeling paradigm based on edge points/pixels as atomic units, circumventing feature distortion from dimensionality reduction; (2) it designs an attention-driven feature exchange module to explicitly mitigate modality heterogeneity; and (3) it incorporates an optimal matching layer to enhance outlier rejection and correspondence robustness. Evaluated on KITTI and nuScenes benchmarks, the method achieves state-of-the-art performance—significantly outperforming existing approaches relying on handcrafted feature matching or generic feature learning—while maintaining high precision, strong robustness, and real-time efficiency.

📝 Abstract
Cross-modal data registration has long been a critical task in computer vision, with extensive applications in autonomous driving and robotics. Accurate and robust registration methods are essential for aligning data from different modalities, forming the foundation for multimodal sensor data fusion and enhancing perception systems' accuracy and reliability. The registration task between 2D images captured by cameras and 3D point clouds captured by Light Detection and Ranging (LiDAR) sensors is usually treated as a visual pose estimation problem. High-dimensional feature similarities from different modalities are leveraged to identify pixel-point correspondences, followed by pose estimation techniques using least squares methods. However, existing approaches often resort to downsampling the original point cloud and image data due to computational constraints, inevitably leading to a loss in precision. Additionally, high-dimensional features extracted using different feature extractors from various modalities require specific techniques to mitigate cross-modal differences for effective matching. To address these challenges, we propose a method that uses edge information from the original point clouds and images for cross-modal registration. We retain crucial information from the original data by extracting edge points and pixels, enhancing registration accuracy while maintaining computational efficiency. The use of edge points and edge pixels allows us to introduce an attention-based feature exchange block to eliminate cross-modal disparities. Furthermore, we incorporate an optimal matching layer to improve correspondence identification. We validate the accuracy of our method on the KITTI and nuScenes datasets, demonstrating its state-of-the-art performance.
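The abstract's first step, extracting edge pixels from the camera image, can be illustrated with a minimal Sobel gradient-magnitude sketch. This is a generic stand-in, not the paper's actual edge extractor; the `edge_pixels` name and `thresh` parameter are hypothetical:

```python
import numpy as np

def edge_pixels(gray, thresh=0.5):
    """Return (row, col) coordinates of edge pixels in a grayscale image
    via a Sobel gradient-magnitude threshold. Illustrative only: the
    paper's edge extraction is not specified at this level of detail."""
    kx = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])  # Sobel x
    ky = kx.T                                                     # Sobel y
    H, W = gray.shape

    def corr3(img, k):
        # 3x3 cross-correlation on the interior (valid region only)
        out = np.zeros((H - 2, W - 2))
        for i in range(3):
            for j in range(3):
                out += k[i, j] * img[i:i + H - 2, j:j + W - 2]
        return out

    gx, gy = corr3(gray, kx), corr3(gray, ky)
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > thresh * mag.max())
    # +1 maps valid-region indices back to full-image coordinates
    return np.column_stack([ys + 1, xs + 1])
```

On a synthetic vertical step edge, this keeps only the pixels adjacent to the intensity discontinuity, which is the kind of sparse-but-structural input the abstract argues preserves precision without full-resolution cost.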
Problem

Research questions and friction points this paper is trying to address.

Accurate registration between 2D images and 3D LiDAR point clouds
Overcoming computational constraints and precision loss in cross-modal registration
Mitigating cross-modal disparities for effective feature matching
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses edge points and pixels for registration
Introduces attention-based feature exchange block
Incorporates optimal matching layer for correspondence
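The abstract notes that once pixel-point correspondences are identified, the pose is recovered with least-squares techniques. A minimal sketch of one classical linear least-squares solver for this step is the Direct Linear Transform (DLT); this is a generic illustration under the assumption of known camera intrinsics, not EdgeRegNet's actual solver, and the function name is hypothetical:

```python
import numpy as np

def dlt_pose(points_3d, pixels_2d, K):
    """Recover camera pose [R | t] from 2D-3D correspondences with the
    Direct Linear Transform, a linear least-squares method (needs >= 6
    correspondences; illustrative, not the paper's solver)."""
    # Back-project pixels to normalized camera coordinates: x = K^-1 [u, v, 1]^T
    uv1 = np.hstack([pixels_2d, np.ones((len(pixels_2d), 1))])
    xn = (np.linalg.inv(K) @ uv1.T).T
    rows = []
    for (X, Y, Z), (x, y, _) in zip(points_3d, xn):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z, -x])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z, -y])
    A = np.asarray(rows)
    # Least-squares solution: right singular vector of A's smallest
    # singular value, reshaped into the 3x4 pose matrix (up to scale).
    P = np.linalg.svd(A)[2][-1].reshape(3, 4)
    # Remove the unknown scale/sign so the rotation part is orthonormal.
    U, S, Vt = np.linalg.svd(P[:, :3])
    R, scale = U @ Vt, S.mean()
    if np.linalg.det(R) < 0:   # enforce a proper rotation (det = +1)
        R, scale = -R, -scale
    t = P[:, 3] / scale
    return R, t
```

In practice such a linear estimate is typically wrapped in RANSAC and refined by nonlinear reprojection-error minimization, which is where the paper's optimal matching layer helps by suppressing outlier correspondences before this stage.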
Yuanchao Yue
School of Control Science and Engineering, Shandong University, Jinan, Shandong, China
Hui Yuan
School of Control Science and Engineering, Shandong University, Jinan, Shandong, China
Qinglong Miao
School of Control Science and Engineering, Shandong University, Jinan, Shandong, China
Xiaolong Mao
School of Software, Shandong University, Jinan, Shandong, China
Raouf Hamzaoui
De Montfort University
signal processing, communication systems
Peter Eisert
Professor Visual Computing, Humboldt University Berlin, Fraunhofer HHI
3D video analysis and synthesis, vision, graphics