🤖 AI Summary
High-fidelity surface reconstruction from irregular point clouds, especially under weak geometric priors, remains challenging. Method: This paper proposes an implicit self-supervised attention prior. Its core innovation is the first self-supervised implicit attention mechanism requiring no external data: it jointly optimizes a learnable embedding dictionary with an implicit neural distance field, modeling repetitive structures and long-range dependencies via cross-attention. Dense points are then sampled from the trained field, their normals computed analytically via automatic differentiation, and both are integrated into a robust implicit moving least squares (RIMLS) framework that balances detail fidelity with regularization in sparse regions. Results: Extensive experiments demonstrate significant improvements over both classical and state-of-the-art learning-based methods under severe degradations, including sparse sampling, heavy noise, and non-uniform sampling density, achieving new SOTA performance in reconstruction accuracy, geometric detail recovery, and robustness.
📝 Abstract
Recovering high-quality surfaces from irregular point clouds is ill-posed unless strong geometric priors are available. We introduce an implicit self-prior approach that distills a shape-specific prior directly from the input point cloud itself and embeds it within an implicit neural representation. This is achieved by jointly training a small dictionary of learnable embeddings with an implicit distance field; at every query location, the field attends to the dictionary via cross-attention, enabling the network to capture and reuse repeating structures and long-range correlations inherent to the shape. Optimized solely with self-supervised point cloud reconstruction losses, our approach requires no external training data. To effectively integrate this learned prior while preserving input fidelity, the trained field is then sampled to extract densely distributed points and analytic normals via automatic differentiation. We integrate the resulting dense point cloud and corresponding normals into a robust implicit moving least squares (RIMLS) formulation. We show that this hybrid strategy preserves fine geometric details in the input data, while leveraging the learned prior to regularize sparse regions. Experiments show that our method outperforms both classical and learning-based approaches in generating high-fidelity surfaces with superior detail preservation and robustness to common data degradations.
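The core mechanism described above (a distance field whose query points cross-attend to a learnable embedding dictionary, with normals obtained from the field's gradient) can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the dictionary, query projection `W_query`, decoder vector `w_decode`, the sphere-based field, and the toy sizes are all hypothetical stand-ins, and central finite differences replace the automatic differentiation the paper uses for analytic normals.

```python
import numpy as np

rng = np.random.default_rng(0)
DICT_SIZE, EMB_DIM = 16, 8  # toy dictionary size and embedding width (assumptions)

# Learnable parameters; in the paper these are trained jointly with the
# distance field via self-supervised reconstruction losses. Random stand-ins here.
dictionary = rng.normal(size=(DICT_SIZE, EMB_DIM))  # embedding dictionary
W_query = rng.normal(size=(EMB_DIM, 3))             # lifts a 3D query to an embedding
w_decode = rng.normal(size=EMB_DIM)                 # toy feature-to-scalar decoder

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attend(x):
    """Cross-attention: a 3D query point attends over the dictionary entries."""
    q = W_query @ x                                        # query embedding
    weights = softmax(dictionary @ q / np.sqrt(EMB_DIM))   # scaled dot-product scores
    return weights @ dictionary, weights                   # attended feature, weights

def sdf(x):
    """Toy implicit distance field: a unit sphere perturbed by the attended feature."""
    feat, _ = attend(x)
    return np.linalg.norm(x) - 1.0 + 0.01 * np.tanh(w_decode @ feat)

def normal(x, eps=1e-4):
    """Normal as the normalized gradient of the field (finite differences here;
    the paper computes this analytically via automatic differentiation)."""
    g = np.array([(sdf(x + eps * e) - sdf(x - eps * e)) / (2 * eps)
                  for e in np.eye(3)])
    return g / np.linalg.norm(g)

x = np.array([1.2, 0.3, -0.1])
feat, w = attend(x)
n = normal(x)
print(w.sum())              # attention weights form a distribution over the dictionary
print(np.linalg.norm(n))    # unit-length surface normal
```

In the full method, many such (point, normal) samples drawn from the trained field would then be handed to the RIMLS formulation alongside the raw input points.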