🤖 AI Summary
To address the high computational and memory overhead of curvature regularization in neural signed distance field (SDF) learning, which stems from its reliance on second-order automatic differentiation, this paper proposes a lightweight finite-difference regularization framework. It introduces, for the first time, an O(h²)-accurate finite-difference stencil for explicit SDF curvature modeling, bypassing Hessian construction and second-order gradients entirely. The method yields plug-and-play approximations of both the Gaussian-curvature and rank-deficiency losses. Empirically, it matches the reconstruction accuracy of automatic-differentiation-based curvature regularization while reducing GPU memory consumption and training time by up to 50%, and it remains robust on sparse, incomplete, and non-CAD data. The core contribution is achieving high-fidelity geometric regularization at the cost of only low-order differentiation, thereby significantly improving the efficiency and scalability of neural SDF learning.
📝 Abstract
We introduce a finite-difference framework for curvature regularization in neural signed distance field (SDF) learning. Existing approaches enforce curvature priors using full Hessian information obtained via second-order automatic differentiation, which is accurate but computationally expensive. Other methods reduce this overhead by avoiding explicit Hessian assembly, but still require higher-order differentiation. In contrast, our method replaces these operations with lightweight finite-difference stencils that approximate second derivatives via the well-known Taylor expansion, with a truncation error of O(h²), and that serve as drop-in replacements for the Gaussian-curvature and rank-deficiency losses. Experiments demonstrate that our finite-difference variants achieve reconstruction fidelity comparable to their automatic-differentiation counterparts, while reducing GPU memory usage and training time by up to a factor of two. Additional tests on sparse, incomplete, and non-CAD data confirm that the proposed formulation is robust and general, offering an efficient and scalable alternative for curvature-aware SDF learning.
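To illustrate the underlying idea (this is a hedged sketch, not the paper's implementation), central-difference stencils with O(h²) truncation error can estimate the gradient and Hessian of an SDF from function evaluations alone, with no autodiff. For an implicit surface f = 0, the Gaussian curvature is K = (∇f⊤ adj(H) ∇f) / |∇f|⁴, and an exact SDF has a rank-deficient (singular) Hessian. The sphere SDF, the step size h, and all function names below are illustrative assumptions:

```python
import numpy as np

def sdf_sphere(p, r=1.0):
    # Illustrative test field: exact SDF of a sphere of radius r at the origin.
    return np.linalg.norm(p) - r

def fd_gradient(f, p, h=1e-3):
    # O(h^2) central-difference gradient: (f(p+h e_i) - f(p-h e_i)) / 2h.
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2.0 * h)
    return g

def fd_hessian(f, p, h=1e-3):
    # O(h^2) central-difference Hessian built purely from function
    # evaluations, bypassing second-order automatic differentiation.
    H = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            ei = np.zeros(3); ei[i] = h
            ej = np.zeros(3); ej[j] = h
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4.0 * h * h)
    return H

def adjugate(A):
    # Cofactor-based adjugate; unlike det(A) * inv(A), this stays
    # well-defined when A is singular (as an exact SDF Hessian is).
    C = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1.0) ** (i + j) * np.linalg.det(minor)
    return C.T

def gaussian_curvature(f, p, h=1e-3):
    # K = (grad^T adj(H) grad) / |grad|^4 for the implicit surface f = 0.
    g = fd_gradient(f, p, h)
    H = fd_hessian(f, p, h)
    return g @ adjugate(H) @ g / np.linalg.norm(g) ** 4
```

On a point of the unit sphere this recovers K ≈ 1 (= 1/r²) up to finite-difference error, and det(H) ≈ 0 reflects the rank-deficiency property that the corresponding loss penalizes.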