RayletDF: Raylet Distance Fields for Generalizable 3D Surface Reconstruction from Point Clouds or Gaussians

📅 2025-08-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses generalizable 3D surface reconstruction from point cloud or 3D Gaussian inputs. We propose RayletDF, a novel method built on raylet distance fields that abandons conventional voxel-based or implicit field representations and instead directly predicts surface intersections along query rays. RayletDF comprises three key components: raylet feature extraction, raylet distance-field regression, and multi-raylet fusion, enabling efficient, single-pass inference. Its core innovation lies in decoupling geometric reconstruction into ray-level local distance prediction, which significantly enhances cross-scene generalization. Evaluated on multiple real-world datasets, RayletDF achieves state-of-the-art accuracy for both point cloud and 3D Gaussian inputs, yielding more complete reconstructions with higher geometric fidelity, while requiring no fine-tuning for zero-shot transfer to unseen scenes.

📝 Abstract
In this paper, we present a generalizable method for 3D surface reconstruction from raw point clouds or 3D Gaussians pre-estimated by 3DGS from RGB images. Unlike existing coordinate-based methods, which are often computationally intensive when rendering explicit surfaces, our proposed method, named RayletDF, introduces a new technique called the raylet distance field, which aims to directly predict surface points from query rays. Our pipeline consists of three key modules: a raylet feature extractor, a raylet distance field predictor, and a multi-raylet blender. These components work together to extract fine-grained local geometric features, predict raylet distances, and aggregate multiple predictions to reconstruct precise surface points. We extensively evaluate our method on multiple public real-world datasets, demonstrating superior performance in surface reconstruction from point clouds or 3D Gaussians. Most notably, our method achieves exceptional generalization ability, successfully recovering 3D surfaces in a single forward pass on unseen datasets at test time.
Problem

Research questions and friction points this paper is trying to address.

Generalizable 3D surface reconstruction from point clouds or 3D Gaussians
Direct surface point prediction from query rays
Overcoming the computational cost of explicit surface rendering in coordinate-based methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Raylet distance field for direct surface point prediction
Three-module pipeline: raylet feature extractor, distance field predictor, multi-raylet blender
Single-forward-pass reconstruction that generalizes to unseen datasets
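The abstract describes a multi-raylet blender that aggregates several per-raylet distance predictions into one surface point along a query ray. The sketch below illustrates that aggregation step only, using a simple confidence-weighted average; the function name, the weighting scheme, and the inputs are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def blend_raylet_predictions(ray_origin, ray_dir, distances, confidences):
    """Blend several raylet distance predictions into one surface point.

    Hypothetical stand-in for a multi-raylet blender: each raylet votes a
    distance t_i along the query ray with a confidence w_i, and the surface
    point is o + t * d with t the confidence-weighted mean of the t_i.
    """
    ray_dir = ray_dir / np.linalg.norm(ray_dir)   # normalize the direction
    weights = confidences / confidences.sum()     # confidences -> weights
    t = float(np.dot(weights, distances))         # blended distance along ray
    return ray_origin + t * ray_dir               # predicted surface point

# Example: three raylets vote on the hit distance along the +z axis.
origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, 0.0, 2.0])             # non-unit on purpose
point = blend_raylet_predictions(
    origin, direction,
    distances=np.array([1.9, 2.0, 2.1]),
    confidences=np.array([1.0, 2.0, 1.0]),
)
# The blended distance is 0.25*1.9 + 0.5*2.0 + 0.25*2.1 = 2.0,
# so the point lies at z = 2.0 on the ray.
```

In the paper's actual pipeline the distances and confidences would come from the learned raylet distance field predictor conditioned on local geometric features; here they are fixed numbers purely to show the blending arithmetic.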
Shenxing Wei
The Hong Kong Polytechnic University
Computer vision

Jinxi Li
PhD candidate, The Hong Kong Polytechnic University
3D vision, dynamic reconstruction, spatial-temporal learning

Yafei Yang
vLAR Group, The Hong Kong Polytechnic University

Siyuan Zhou
vLAR Group, The Hong Kong Polytechnic University

Bo Yang
vLAR Group, The Hong Kong Polytechnic University