🤖 AI Summary
Traditional unsigned distance field (UDF) learning relies on large-scale 3D shape datasets, incurs high training costs, and generalizes poorly to new data. To address these limitations, this paper proposes LoSF-UDF, a lightweight, shape-agnostic UDF learning framework that reconstructs surfaces directly from point clouds without shape-specific training. Its core contributions are: (1) a mathematically grounded local shape function (LoSF) patch synthesis method that constructs a compact synthetic training set spanning smooth regions, edges, and corners; (2) a feature-learning module that gathers points within a radius of each query and weights them with an attention mechanism; and (3) a lightweight architecture with only 653 KB of trainable parameters that generalizes well across shapes. Experiments show that LoSF-UDF matches or outperforms state-of-the-art methods on both synthetic and real-scanned point clouds, is robust to noise and outliers, requires only a 0.5 GB training set, and provides fast, reliable initialization for downstream iterative optimization methods.
📝 Abstract
Unsigned distance fields (UDFs) provide a versatile framework for representing a diverse array of 3D shapes, encompassing both watertight and non-watertight geometries. Traditional UDF learning methods typically require extensive training on large 3D shape datasets, which is costly and necessitates re-training for new datasets. This paper presents a novel neural framework, LoSF-UDF, for reconstructing surfaces from 3D point clouds by leveraging local shape functions to learn UDFs. We observe that 3D shapes manifest simple patterns in localized regions, prompting us to develop a training dataset of point cloud patches characterized by mathematical functions that represent a continuum from smooth surfaces to sharp edges and corners. Our approach learns features within a specific radius around each query point and utilizes an attention mechanism to focus on the features most relevant to UDF estimation. Despite being highly lightweight, with only 653 KB of trainable parameters and a modest training dataset requiring 0.5 GB of storage, our method enables efficient and robust surface reconstruction from point clouds without requiring shape-specific training. Furthermore, our method exhibits enhanced resilience to noise and outliers in point clouds compared to existing methods. We conduct comprehensive experiments and comparisons across various datasets, including synthetic and real-scanned point clouds, to validate our method's efficacy. Notably, our lightweight framework offers rapid and reliable initialization for other unsupervised iterative approaches, improving both the efficiency and accuracy of their reconstructions. Our project and code are available at https://jbhu67.github.io/LoSF-UDF.github.io.
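The abstract's query pipeline (gather the points within a radius of a query, then weight them to estimate the unsigned distance) can be illustrated with a minimal numpy sketch. This is not the authors' code: the function names, the radius value, and the softmax-over-distance weighting standing in for the learned attention module are all illustrative assumptions.

```python
import numpy as np

def local_patch(points, query, radius):
    # Gather the neighbors of `query` within `radius` and center the
    # patch on the query point (illustrative, not the paper's API).
    d = np.linalg.norm(points - query, axis=1)
    return points[d <= radius] - query

def attention_udf(patch, temperature=0.02):
    # Toy stand-in for the learned attention: softmax weights that
    # favor the nearest patch points, then a weighted mean of their
    # distances as a crude unsigned-distance estimate.
    d = np.linalg.norm(patch, axis=1)
    w = np.exp(-(d - d.min()) / temperature)  # shifted for stability
    w /= w.sum()
    return float(np.dot(w, d))

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(1000, 3))
pts[:, 2] = 0.0                        # points sampled on the plane z = 0
q = np.array([0.0, 0.0, 0.3])          # query 0.3 above the surface
est = attention_udf(local_patch(pts, q, radius=0.6))
print(f"estimated UDF: {est:.3f}")     # true unsigned distance is 0.3
```

In the actual method this hand-crafted weighting is replaced by a trained network, which is what allows the estimate to remain accurate near sharp edges and corners where a simple nearest-neighbor heuristic breaks down.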