🤖 AI Summary
This work addresses generalizable 3D surface reconstruction from point cloud or 3D Gaussian inputs. We propose RayletDF, a novel method that abandons conventional coordinate-based implicit representations and instead directly predicts surface intersection points along query rays via a raylet distance field. RayletDF comprises three key components: a raylet feature extractor, a raylet distance field predictor, and a multi-raylet blender, together enabling efficient, single-pass inference. Its core innovation lies in decoupling geometric reconstruction into ray-level local distance prediction, which significantly enhances cross-scene generalization. Evaluated on multiple public real-world datasets, RayletDF achieves state-of-the-art accuracy for both point cloud and 3D Gaussian inputs, yielding more complete reconstructions with higher geometric fidelity, and requires no fine-tuning for zero-shot transfer to unseen scenes.
📝 Abstract
In this paper, we present a generalizable method for 3D surface reconstruction from raw point clouds or from 3D Gaussians pre-estimated by 3DGS from RGB images. Unlike existing coordinate-based methods, which are often computationally intensive when rendering explicit surfaces, our method, named RayletDF, introduces a new technique called the raylet distance field, which directly predicts surface points from query rays. Our pipeline consists of three key modules: a raylet feature extractor, a raylet distance field predictor, and a multi-raylet blender. These components work together to extract fine-grained local geometric features, predict raylet distances, and aggregate multiple predictions into precise surface points. We extensively evaluate our method on multiple public real-world datasets, demonstrating superior performance in surface reconstruction from point clouds or 3D Gaussians. Most notably, our method achieves exceptional generalization, successfully recovering 3D surfaces in a single forward pass on unseen datasets at test time.
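To make the three-module pipeline concrete, here is a minimal, purely illustrative sketch. The function names and the closed-form stand-ins are assumptions for exposition only: in the actual method each module is learned, whereas here "features" are just the nearest points to a query ray, each raylet "predicts" its projection length as the distance, and the blender takes a confidence-weighted average of the per-raylet predictions.

```python
import numpy as np

# Hypothetical sketch of a raylet-distance-field pipeline (illustrative
# names, not the authors' API). A "raylet" is modeled as the query ray
# paired with one nearby local geometric feature.

def raylet_features(ray_o, ray_d, points, k=4):
    """Toy raylet feature extractor: gather the k points nearest the ray."""
    v = points - ray_o
    t = v @ ray_d                       # projection length along the ray
    perp = v - np.outer(t, ray_d)       # perpendicular offset from the ray
    d_perp = np.linalg.norm(perp, axis=1)
    idx = np.argsort(d_perp)[:k]
    return t[idx], d_perp[idx]          # per-raylet "features"

def raylet_distances(t, d_perp):
    """Toy distance predictor: each raylet votes its projection length."""
    return t                            # a learned MLP would refine this

def blend(ray_o, ray_d, dists, d_perp, tau=0.1):
    """Multi-raylet blender: confidence-weighted fusion of predictions."""
    w = np.exp(-d_perp / tau)           # closer raylets get higher weight
    d = float(np.sum(w * dists) / np.sum(w))
    return ray_o + d * ray_d            # fused surface point

# Usage: a flat surface z = 0 sampled as a point cloud; a downward ray
# from z = 1 should intersect it near (0.05, 0.05, 0).
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, (500, 2)), np.zeros(500)])
ray_o = np.array([0.05, 0.05, 1.0])
ray_d = np.array([0.0, 0.0, -1.0])
t, d_perp = raylet_features(ray_o, ray_d, pts)
surf = blend(ray_o, ray_d, raylet_distances(t, d_perp), d_perp)
print(surf)  # ≈ [0.05, 0.05, 0.0]
```

Because each ray yields its surface point in one pass (no marching over a volumetric field), this decomposition hints at why the approach can be both fast and scene-agnostic.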