🤖 AI Summary
To address the high computational cost of SDF-based differentiable rendering and the limited reconstruction accuracy of multi-view neural surface reconstruction, this paper proposes a lightweight light-field probe representation. The method employs a physics-inspired minimal radiance parameterization, using only four parameters per voxel plus a single-layer micro-MLP, to decouple spatial and angular radiance modeling. It further integrates a voxelized dual-resolution grid, a fully fused CUDA kernel, and joint optimization of the implicit SDF field with differentiable rendering. Evaluated on four real-world datasets, the approach achieves over 2× faster training, enables real-time rendering, and attains state-of-the-art performance in both surface reconstruction error (e.g., Chamfer distance) and image quality (PSNR). Notably, it improves geometric fidelity and appearance quality simultaneously.
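The decoupling described above can be illustrated with a minimal sketch: spatial and angular radiance features live in two grids of different resolutions, are fetched independently, and are fused by a single linear layer. All names, grid sizes, and the nearest-neighbor lookup below are illustrative assumptions, not the paper's actual implementation (which uses a fused CUDA kernel and trilinear interpolation).

```python
import numpy as np

# Hypothetical sketch of the dual-grid radiance model; sizes and names
# are assumptions for illustration only.
rng = np.random.default_rng(0)

SPATIAL_RES, ANGULAR_RES = 64, 16   # dual-resolution grids (assumed sizes)
F = 4                               # four parameters per voxel (per the summary)

spatial_grid = rng.normal(size=(SPATIAL_RES,) * 3 + (F,)).astype(np.float32)
angular_grid = rng.normal(size=(ANGULAR_RES,) * 3 + (F,)).astype(np.float32)

# Single-layer "micro-MLP": one linear map plus a sigmoid to RGB.
W = rng.normal(scale=0.1, size=(2 * F, 3)).astype(np.float32)
b = np.zeros(3, dtype=np.float32)

def lookup(grid, x):
    """Nearest-neighbor fetch of a point x in [0,1]^3 (trilinear in practice)."""
    res = grid.shape[0]
    i, j, k = np.clip((x * res).astype(int), 0, res - 1)
    return grid[i, j, k]

def radiance(x, d):
    """Radiance at point x seen from unit direction d: spatial and angular
    features are looked up separately, then fused by the micro-MLP."""
    feats = np.concatenate([lookup(spatial_grid, x),
                            lookup(angular_grid, 0.5 * (d + 1.0))])
    return 1.0 / (1.0 + np.exp(-(feats @ W + b)))  # RGB in [0,1]

x = np.array([0.3, 0.5, 0.7], dtype=np.float32)  # sample point
d = np.array([0.0, 0.0, 1.0], dtype=np.float32)  # view direction
rgb = radiance(x, d)
```

Keeping the angular grid coarser than the spatial one reflects the intuition that view-dependent appearance varies more smoothly than spatial texture, which is what keeps the per-voxel budget at four parameters.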
📝 Abstract
SDF-based differentiable rendering frameworks have achieved state-of-the-art multi-view 3D shape reconstruction. In this work, we re-examine this family of approaches by minimally reformulating its core appearance model in a way that simultaneously yields faster computation and increased performance. To this end, we present a physically inspired minimal radiance parametrization that decouples angular and spatial contributions by encoding them with a small number of features stored in two volumetric grids of different resolutions. Requiring as little as four parameters per voxel and a tiny MLP call inside a single fully fused kernel, our approach improves both surface and image (PSNR) metrics while providing a significant training speedup and real-time rendering. We show that this performance is consistently achieved on real data across two widely different and popular application fields, generic object and human subject shape reconstruction, using four representative and challenging datasets.