Attention-guided reference point shifting for Gaussian-mixture-based partial point set registration

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the insufficient translation/rotation invariance of deep learning–Gaussian mixture model (GMM) hybrid methods (e.g., DeepGMR) for partial-to-partial point cloud registration. We propose an Attention-guided Reference Point Self-adaptation (ARPS) mechanism that eliminates reliance on overlapping-region correspondence. Instead, an attention module dynamically identifies a common reference point across two point clouds and integrates it into the GMM framework to enforce feature consistency. The resulting ARPS layer is plug-and-play, enhancing existing GMM-based registration networks (e.g., DeepGMR, UGMMReg) with improved robustness and accuracy under rigid transformations. Extensive evaluation on multiple benchmarks demonstrates that our method surpasses state-of-the-art approaches leveraging Transformers or attention blocks for common-region extraction, validating the effectiveness and generalizability of transformation-invariant feature modeling.

📝 Abstract
This study investigates the impact of the invariance of feature vectors for partial-to-partial point set registration under translation and rotation of the input point sets, particularly for techniques based on deep learning and Gaussian mixture models (GMMs). We reveal both theoretical and practical problems with such deep-learning-based registration methods using GMMs, focusing in particular on the limitations of DeepGMR, a pioneering study in this line, in partial-to-partial point set registration. Our primary goal is to uncover the causes behind these failures and to propose a comprehensible solution. To this end, we introduce an attention-based reference point shifting (ARPS) layer, which robustly identifies a common reference point of two partial point sets, thereby acquiring transformation-invariant features. The ARPS layer employs a well-studied attention module to find a common reference point rather than the overlap region. As a result, it significantly enhances the performance of DeepGMR and its recent variant, UGMMReg. Furthermore, these extended models outperform even prior deep learning methods that use attention blocks or Transformers to extract the overlap region or common reference points. We believe these findings provide deeper insights into registration methods combining deep learning and GMMs.
Problem

Research questions and friction points this paper is trying to address.

Addresses invariance issues in partial point set registration
Enhances DeepGMR and UGMMReg with attention-based reference shifting
Proposes ARPS layer for robust common reference point identification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention-based reference point shifting layer
Robust common reference point identification
Enhances DeepGMR and UGMMReg performance
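The core idea behind the ARPS layer can be illustrated with a minimal sketch: an attention-style weighting over points selects a soft reference point, and shifting both point sets by their respective reference points yields translation-invariant coordinates. The similarity scoring below is a hand-rolled stand-in for the paper's learned attention module, and all function and variable names here are hypothetical, not from the paper's code.

```python
import numpy as np

def arps_reference_shift(points, feats):
    """Sketch of attention-guided reference point shifting.

    points: (N, 3) array of 3D coordinates.
    feats:  (N, D) per-point feature descriptors (assumed given).
    Returns the points shifted so that an attention-weighted
    reference point sits at the origin.
    """
    # Self-attention-style saliency: each point's score is its summed
    # feature similarity to all points (a crude stand-in for the
    # learned attention block described in the paper).
    sim = feats @ feats.T                 # (N, N) similarity matrix
    scores = sim.sum(axis=1)              # per-point saliency
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over points

    # Attention-weighted reference point (a soft "common" point).
    ref = weights @ points                # (3,)
    return points - ref                   # shift reference to origin
```

Because the weights depend only on the features, translating the input cloud leaves the weights unchanged and the shifted output identical, which is the translation-invariance property the ARPS layer exploits.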
Mizuki Kikkawa
School of Engineering, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
Tatsuya Yatagawa
Hitotsubashi University
Computer Graphics, Geometry Processing, Machine Learning, Nondestructive Testing
Y. Ohtake
School of Engineering, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
Hiromasa Suzuki
School of Engineering, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan