Correspondence-Free Multiview Point Cloud Registration via Depth-Guided Joint Optimisation

📅 2025-06-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multiview point cloud registration often fails to converge to the global optimum in complex scenes because it relies on explicit feature matching and hand-crafted data association. To address this, the authors propose a correspondence-free, depth-map-guided joint optimisation framework: the global map is parameterised as a differentiable depth map, and the poses of all frames are optimised jointly with the depth-map structure within a nonlinear least-squares formulation, enabling implicit and dynamic data association. The method eliminates conventional feature extraction and explicit correspondence estimation, instead using raw depth observations as supervision for consistent 3D reconstruction. Experiments on real-world datasets show that the approach outperforms existing state-of-the-art methods in both registration accuracy and robustness, particularly under challenging conditions such as textureless regions, motion blur, and large viewpoint variations.

📝 Abstract
Multiview point cloud registration is a fundamental task for constructing globally consistent 3D models. Existing approaches typically rely on feature extraction and data association across multiple point clouds; however, these processes struggle to reach a globally optimal solution in complex environments. In this paper, we introduce a novel correspondence-free multiview point cloud registration method. Specifically, we represent the global map as a depth map and leverage raw depth information to formulate a non-linear least squares optimisation that jointly estimates the poses of the point clouds and the global map. Unlike traditional feature-based bundle adjustment methods, which rely on explicit feature extraction and data association, our method bypasses these challenges by associating multi-frame point clouds with a global depth map through their corresponding poses. This data association is incorporated implicitly and refined dynamically during the optimisation process. Extensive evaluations on real-world datasets demonstrate that our method outperforms state-of-the-art approaches in accuracy, particularly in challenging environments where feature extraction and data association are difficult.
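The core idea in the abstract can be illustrated with a toy analogue. The sketch below is a hypothetical 1-D simplification, not the paper's implementation: a global "depth profile" on a fixed grid stands in for the depth-map parameterisation, each frame's unknown shift stands in for its pose, and `scipy.optimize.least_squares` jointly refines all shifts and the map. Data association happens implicitly, by interpolating the current map at the pose-shifted sample locations, with no feature matching.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 1-D analogue: a global depth profile d(x) observed by
# several frames, each offset by an unknown "pose" t_k. All names and
# values here are illustrative assumptions, not from the paper.
grid = np.linspace(0.0, 1.0, 50)             # global map support
true_profile = 1.0 + 0.3 * np.sin(4 * np.pi * grid)
true_offsets = np.array([0.0, 0.07, -0.05])  # per-frame "poses"

rng = np.random.default_rng(0)
obs_x = np.linspace(0.1, 0.9, 30)            # sample locations per frame
observations = [
    np.interp(obs_x + t, grid, true_profile)
    + 0.01 * rng.standard_normal(obs_x.size)
    for t in true_offsets
]

def residuals(params):
    offsets, profile = params[:3], params[3:]
    res = []
    for t, z in zip(offsets, observations):
        # implicit, dynamically refined association: each raw observation
        # is compared against the current map at the pose-shifted location
        res.append(np.interp(obs_x + t, grid, profile) - z)
    # gauge fixing: anchor the first frame's offset so the joint
    # map/pose problem is well posed
    res.append(np.array([10.0 * offsets[0]]))
    return np.concatenate(res)

# jointly optimise 3 pose offsets + 50 map values from a flat initial map
x0 = np.concatenate([np.zeros(3), np.ones(grid.size)])
sol = least_squares(residuals, x0)
est_offsets = sol.x[:3]
```

The key design point mirrored here is that poses and the map are a single parameter vector in one least-squares problem, so association is re-evaluated at every iteration rather than fixed up front by a matching stage.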
Problem

Research questions and friction points this paper is trying to address.

Multiview point cloud registration without correspondence
Joint optimization of poses and global depth map
Overcoming feature extraction challenges in complex environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Correspondence-free registration via depth-guided optimization
Joint estimation of poses and global depth map
Implicit dynamic refinement during optimization