Neural Descriptors: Self-Supervised Learning of Robust Local Surface Descriptors Using Polynomial Patches

📅 2025-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional shape descriptors (e.g., HKS, WKS, SHOT) are sensitive to mesh connectivity, non-uniform sampling, and topological noise. Differential invariants from differential geometry offer theoretical robustness, but their computation on discrete meshes is numerically unstable. To address this, the paper proposes the first framework that deeply integrates differential-invariant principles with self-supervised learning: synthetic training data is generated from polynomial surface patch models, a differentiable geometric feature encoder is designed, and a curvature-field-driven contrastive loss is introduced to learn local geometric representations that are sampling-invariant and topologically robust. The method requires no manual annotations and natively supports partial matching, holes, and non-manifold edges. Evaluated on the FAUST, SCAPE, TOPKIDS, and SHREC'16 benchmarks, it significantly outperforms HKS, WKS, and SHOT, with the largest gains in correspondence accuracy under topological corruption and sparse sampling.
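The synthetic-data idea in the summary (polynomial surface patch modeling, with differently sampled views of the same patch) can be sketched as follows. This is an illustrative assumption, not the paper's released code: the patch form (a Monge patch z = f(x, y) with random polynomial coefficients), the degree, and the function names are all hypothetical.

```python
import numpy as np

def random_polynomial_patch(degree=3, rng=None):
    """Random Monge-form patch z = f(x, y) with polynomial f (hypothetical parameterization)."""
    rng = np.random.default_rng(rng)
    # One coefficient per monomial x^i * y^j with i + j <= degree.
    coeffs = {(i, j): rng.normal(scale=0.5)
              for i in range(degree + 1) for j in range(degree + 1 - i)}
    def f(x, y):
        return sum(c * x**i * y**j for (i, j), c in coeffs.items())
    return f

def sample_patch(f, n_points, rng=None):
    """Non-uniform random sampling of the patch; two calls with different seeds
    and point counts yield differently sampled views of the same surface."""
    rng = np.random.default_rng(rng)
    xy = rng.uniform(-1.0, 1.0, size=(n_points, 2))
    z = f(xy[:, 0], xy[:, 1])
    return np.column_stack([xy, z])

# A positive pair for self-supervision: same underlying surface, different sampling.
f = random_polynomial_patch(rng=0)
view_a = sample_patch(f, 256, rng=1)
view_b = sample_patch(f, 128, rng=2)
```

Because the surface is analytic, ground-truth differential quantities (e.g., curvatures) are available in closed form on such patches, which is what makes them attractive as free supervision.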

📝 Abstract
Classical shape descriptors such as Heat Kernel Signature (HKS), Wave Kernel Signature (WKS), and Signature of Histograms of OrienTations (SHOT), while widely used in shape analysis, exhibit sensitivity to mesh connectivity, sampling patterns, and topological noise. While differential geometry offers a promising alternative through its theory of differential invariants, which are theoretically guaranteed to be robust shape descriptors, the computation of these invariants on discrete meshes often leads to unstable numerical approximations, limiting their practical utility. We present a self-supervised learning approach for extracting geometric features from 3D surfaces. Our method combines synthetic data generation with a neural architecture designed to learn sampling-invariant features. By integrating our features into existing shape correspondence frameworks, we demonstrate improved performance on standard benchmarks including FAUST, SCAPE, TOPKIDS, and SHREC'16, showing particular robustness to topological noise and partial shapes.
Problem

Research questions and friction points this paper is trying to address.

Classical shape descriptors are sensitive to mesh connectivity and noise.
Differential invariants are unstable on discrete meshes, limiting practical use.
How can robust, sampling-invariant 3D surface features be learned without manual annotation?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised learning for 3D surface features
Neural architecture for sampling-invariant descriptors
Synthetic data integration for robust shape analysis
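The contrastive training signal behind these contributions can be sketched with a generic InfoNCE objective: descriptors of corresponding points under two samplings of the same patch should agree. The paper's actual loss is curvature-field-driven and not reproduced here, so the function below, its temperature value, and the demo data are illustrative assumptions only.

```python
import numpy as np

def info_nce(desc_a, desc_b, temperature=0.1):
    """One-directional InfoNCE: row i of desc_a should match row i of desc_b
    and no other row (a generic stand-in for the paper's contrastive loss)."""
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                    # (n, n) cosine similarities
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(a))
    return -log_prob[idx, idx].mean()                 # diagonal = positive pairs

# Demo: matched descriptors should score a much lower loss than mismatched ones.
rng = np.random.default_rng(0)
desc = rng.normal(size=(32, 16))
loss_match = info_nce(desc, desc)
loss_mismatch = info_nce(desc, np.roll(desc, 1, axis=0))
```

A symmetric variant (averaging the loss over both matching directions) is a common design choice in contrastive learning and would be a one-line extension.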