🤖 AI Summary
To address inefficient ray sampling and insufficient reconstruction fidelity for foreground implicit surfaces in Neural Radiance Fields (NeRF), this paper proposes an adaptive sampling framework targeted at those surfaces. The method models a differentiable probability density function (PDF) directly in the image projection space to guide dense ray sampling within regions of interest. It also introduces a new surface reconstruction loss that jointly optimizes the implicit surface and radiance field by combining near-surface geometric priors with free-space constraints. The approach requires no additional supervision or pretraining and consistently improves mainstream NeRF variants: it enhances geometric accuracy and detail fidelity in target regions while reducing redundant sampling overhead. Experiments demonstrate substantial gains across standard metrics (PSNR, SSIM, and LPIPS), particularly in complex scenes.
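The core sampling idea above can be sketched in a few lines: turn a per-pixel importance map into a PDF and draw ray origins from it, rather than sampling pixels uniformly. The snippet below is a minimal illustration with NumPy, assuming a hypothetical `importance` map (e.g. derived from a foreground mask or accumulated opacity); it is not the paper's exact, differentiable formulation.

```python
import numpy as np

def sample_rays_from_pdf(importance, n_rays, rng=None):
    """Sample pixel coordinates with probability proportional to `importance`.

    `importance` is a hypothetical (H, W) per-pixel weight map; this is a
    sketch of PDF-guided ray sampling, not the paper's exact method.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = importance.shape
    pdf = importance.ravel().astype(np.float64)
    pdf = pdf / pdf.sum()                         # normalize to a valid PDF
    flat = rng.choice(h * w, size=n_rays, p=pdf)  # draw flat pixel indices
    ys, xs = np.unravel_index(flat, (h, w))       # back to (row, col) coords
    return np.stack([ys, xs], axis=-1)            # (n_rays, 2)

# Toy example: concentrate importance on a central "foreground" region,
# so most sampled rays land there.
imp = np.ones((8, 8))
imp[2:6, 2:6] = 50.0
coords = sample_rays_from_pdf(imp, n_rays=1000, rng=np.random.default_rng(0))
```

In a real pipeline the importance map would be updated as training progresses, so sampling density tracks the current estimate of the foreground surface.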
📝 Abstract
Several variants of Neural Radiance Fields (NeRFs) have significantly improved the accuracy of synthesized images and surface reconstruction of 3D scenes/objects. A characteristic shared by all of these methods is that, due to scalability constraints, none can train the neural network on every possible input, namely every pixel and every potential 3D point along the projection rays. While vanilla NeRFs uniformly sample both the image pixels and the 3D points along the projection rays, some variants focus only on guiding the sampling of the 3D points along the projection rays. In this paper, we leverage the implicit surface representation of the foreground scene and model a probability density function in a 3D image projection space to achieve a more targeted sampling of rays toward regions of interest, resulting in improved rendering. Additionally, a new surface reconstruction loss is proposed for improved performance. This loss fully exploits the proposed 3D image projection space model and incorporates near-to-surface and empty-space components. By integrating our novel sampling strategy and loss into current state-of-the-art neural implicit surface renderers, we achieve more accurate and detailed 3D reconstructions and improved image rendering, especially for the regions of interest in any given scene.
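The two loss components named in the abstract can be illustrated schematically: a near-to-surface term that pulls the implicit (SDF-style) values toward zero at samples close to the observed surface, and an empty-space term that penalizes values falling below a margin at samples known to be free space. The NumPy sketch below uses illustrative names and a hinge-style free-space penalty as an assumption; it is not the paper's exact loss.

```python
import numpy as np

def surface_reconstruction_loss(sdf_near, sdf_free, truncation=0.05):
    """Schematic two-part loss (illustrative, not the paper's notation).

    sdf_near: predicted signed-distance values at samples near the surface;
              driven toward zero (near-to-surface component).
    sdf_free: predicted signed-distance values at known empty-space samples;
              penalized only when they drop below the truncation margin
              (empty-space component).
    """
    near_term = np.mean(sdf_near ** 2)
    free_term = np.mean(np.maximum(0.0, truncation - sdf_free) ** 2)
    return near_term + free_term
```

With near-surface predictions at zero and empty-space predictions well above the margin, the loss vanishes; violations of either constraint make it positive, so minimizing it shapes the implicit surface while keeping free space empty.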