🤖 AI Summary
To address the computational inefficiency caused by redundant ray sampling in high-resolution rendering with generalizable NeRFs, this paper proposes a depth-guided bundle sampling framework (released as GDB-NeRF). The method introduces two key ideas: (1) depth-guided bundle sampling, which groups spatially adjacent rays into bundles, samples them collectively, and decodes all rays in a bundle from a shared representation; and (2) adaptive sampling based on depth confidence, which dynamically allocates samples along each ray, concentrating them in complex regions and reducing them in smoother areas. When applied to ENeRF, the method achieves up to a 1.27 dB PSNR improvement and a 47% increase in FPS on the DTU dataset. Across both synthetic and real-world scenes, it renders up to twice as fast as existing generalizable NeRF approaches while delivering state-of-the-art quality.
📝 Abstract
Recent advancements in generalizable novel view synthesis have achieved impressive quality through interpolation between nearby views. However, rendering high-resolution images remains computationally intensive due to the need for dense sampling of all rays. Recognizing that natural scenes are typically piecewise smooth and sampling all rays is often redundant, we propose a novel depth-guided bundle sampling strategy to accelerate rendering. By grouping adjacent rays into a bundle and sampling them collectively, a shared representation is generated for decoding all rays within the bundle. To further optimize efficiency, our adaptive sampling strategy dynamically allocates samples based on depth confidence, concentrating more samples in complex regions while reducing them in smoother areas. When applied to ENeRF, our method achieves up to a 1.27 dB PSNR improvement and a 47% increase in FPS on the DTU dataset. Extensive experiments on synthetic and real-world datasets demonstrate state-of-the-art rendering quality and up to 2x faster rendering compared to existing generalizable methods. Code is available at https://github.com/KLMAV-CUC/GDB-NeRF.
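The two core ideas in the abstract, grouping adjacent rays into bundles that share one representation, and allocating per-ray samples by depth confidence, can be illustrated with a minimal NumPy sketch. Note this is an assumption-laden toy, not the paper's implementation: the patch size, the confidence formula `1/(1+var)`, and the sample-count bounds are all placeholders chosen for clarity.

```python
import numpy as np

def bundle_rays(ray_dirs, patch=2):
    """Group spatially adjacent ray directions (H, W, 3) into
    (H//patch, W//patch, patch*patch, 3) bundles, so each bundle
    can be sampled once and share one representation."""
    H, W, _ = ray_dirs.shape
    b = ray_dirs.reshape(H // patch, patch, W // patch, patch, 3)
    # Move the within-patch axes together, then flatten them into one bundle axis.
    return b.transpose(0, 2, 1, 3, 4).reshape(H // patch, W // patch, patch * patch, 3)

def allocate_samples(depth_var, n_min=2, n_max=8):
    """Toy adaptive allocation: low depth confidence (high variance)
    gets more samples along the ray; confident, smooth regions get fewer."""
    conf = 1.0 / (1.0 + depth_var)            # in (0, 1]; 1 = fully confident
    n = n_min + (n_max - n_min) * (1.0 - conf)
    return np.round(n).astype(int)
```

With a 4x4 image and 2x2 patches, `bundle_rays` yields a 2x2 grid of 4-ray bundles; `allocate_samples` returns `n_min` samples where depth variance is zero and approaches `n_max` as variance grows.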