AI Summary
This work addresses the challenge of erroneous 2D–3D correspondences in scenes with repetitive textures, which often arise from missing or inconsistent geometric structure. To mitigate this, the authors propose a registration method that combines local geometric enhancement with graph-based distributional consistency. Specifically, a normal-guided Local Geometry Enhancement (LGE) module injects geometric cues into image features, while a Graph Distribution Consistency (GDC) module leverages graph neural networks to model matching relationships and explicitly enforce consistency among similarity distributions. Evaluated on the RGB-D Scenes v2 and 7-Scenes datasets, the proposed approach significantly outperforms existing methods, achieving state-of-the-art 2D–3D registration accuracy.
Abstract
Image-to-point cloud registration methods typically follow a coarse-to-fine pipeline, extracting patch-level correspondences and refining them into dense pixel-to-point matches. However, in scenes with repetitive patterns, images often lack sufficient 3D structural cues and align poorly with point clouds, leading to incorrect matches. Moreover, prior methods usually overlook structural consistency, limiting how fully correspondences can be exploited. To address these issues, we propose two novel modules: the Local Geometry Enhancement (LGE) module and the Graph Distribution Consistency (GDC) module. LGE enhances both image and point cloud features with normal vectors, injecting geometric structure into image features to reduce mismatches. GDC constructs a graph from matched points to update features and explicitly constrain similarity distributions. Extensive experiments and ablations on two benchmarks, RGB-D Scenes v2 and 7-Scenes, demonstrate that our approach achieves state-of-the-art performance in image-to-point cloud registration.
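To make the normal-guided enhancement idea concrete, here is a minimal numpy sketch. It is not the paper's implementation: the PCA-based normal estimation, the `enhance_features` fusion (concatenation followed by a learned linear projection), and all names and dimensions are illustrative assumptions about how normal vectors might be injected into features.

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate per-point normals via PCA over k nearest neighbors
    (a common heuristic; the paper's exact procedure may differ)."""
    normals = np.zeros_like(points)
    for i in range(len(points)):
        dists = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        # The normal is the eigenvector of the smallest eigenvalue.
        _, vecs = np.linalg.eigh(cov)
        normals[i] = vecs[:, 0]
    return normals

def enhance_features(feats, normals, W):
    """Fuse features with normals: concatenate, then project
    back to the feature dimension with a (learned) matrix W."""
    return np.concatenate([feats, normals], axis=1) @ W

rng = np.random.default_rng(0)
pts = rng.normal(size=(32, 3))        # toy point cloud
feats = rng.normal(size=(32, 16))     # toy per-point features
W = rng.normal(size=(16 + 3, 16))     # stand-in for a learned layer
out = enhance_features(feats, estimate_normals(pts), W)
print(out.shape)  # (32, 16)
```

In a trained model `W` would be a learned layer and the same fusion would be applied to image features after projecting normals into image space; the sketch only shows the geometric-cue injection step.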
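The similarity-distribution constraint in GDC can likewise be sketched. Assuming each matched 2D point carries a row of similarity logits over candidate 3D points, one plausible consistency penalty is the symmetric KL divergence between the distributions of graph-connected matches; the function name, graph construction, and choice of divergence below are our illustrative assumptions, not the paper's loss.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_distribution_consistency(sim, edges):
    """sim:   (N, M) similarity logits, one row per matched 2D point
              over M candidate 3D points.
       edges: (i, j) pairs connecting matches that should agree.
       Returns the mean symmetric KL divergence between connected
       matches' similarity distributions (lower = more consistent)."""
    p = softmax(sim, axis=1)  # strictly positive, so logs are safe
    loss = 0.0
    for i, j in edges:
        kl_ij = np.sum(p[i] * np.log(p[i] / p[j]))
        kl_ji = np.sum(p[j] * np.log(p[j] / p[i]))
        loss += 0.5 * (kl_ij + kl_ji)
    return loss / len(edges)

sim = np.array([[2.0, 0.1, 0.1],   # match 0 prefers candidate 0
                [1.9, 0.2, 0.1],   # match 1 agrees -> small penalty
                [0.1, 0.1, 2.0]])  # match 2 disagrees -> large penalty
print(graph_distribution_consistency(sim, [(0, 1)]))
print(graph_distribution_consistency(sim, [(0, 2)]))
```

Identical distributions give a penalty of exactly zero, so minimizing this term pushes neighboring matches toward agreeing on the same candidates, which is the intuition behind explicitly constraining similarity distributions.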