🤖 AI Summary
Existing neural radiance field (NeRF) approaches struggle to model both aleatoric uncertainty (irreducible noise in the observations) and epistemic uncertainty (model uncertainty arising from limited data), which limits their deployment in safety-critical applications. This work proposes a probabilistic NeRF framework grounded in evidential deep learning that estimates both types of uncertainty in a single forward pass, with no additional computational overhead. By integrating volume rendering with a principled uncertainty quantification mechanism, the method achieves high-fidelity scene reconstruction while providing reliable confidence estimates. Experiments on three standard benchmarks show that the proposed model attains state-of-the-art performance in both reconstruction quality and uncertainty estimation.
📝 Abstract
Understanding sources of uncertainty is fundamental to trustworthy three-dimensional scene modeling. While recent advances in neural radiance fields (NeRFs) achieve impressive accuracy in scene reconstruction and novel view synthesis, their lack of uncertainty estimation significantly limits their deployment in safety-critical settings. Existing uncertainty quantification methods for NeRFs fail to capture both aleatoric and epistemic uncertainty; among those that quantify one or the other, many either compromise rendering quality or incur significant computational overhead to obtain uncertainty estimates. To address these issues, we introduce Evidential Neural Radiance Fields, a probabilistic approach that integrates seamlessly with the NeRF rendering process and quantifies both aleatoric and epistemic uncertainty directly from a single forward pass. We compare multiple uncertainty quantification methods on three standard benchmarks, where our approach achieves state-of-the-art scene reconstruction fidelity and uncertainty estimation quality.
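To make the "single forward pass" claim concrete, below is a minimal sketch of how an evidential regression head in the style of deep evidential regression (Amini et al., 2020) can expose both uncertainty types at once: the network predicts the parameters of a Normal-Inverse-Gamma distribution, from which aleatoric and epistemic uncertainty follow in closed form. The `EvidentialHead` module, its feature dimension, and its placement at the end of a NeRF color branch are illustrative assumptions, not the paper's actual architecture.

```python
# A minimal sketch of a deep-evidential-regression head, one plausible way to
# realize single-pass uncertainty estimation as the abstract describes. The
# module name and its role in the NeRF pipeline are assumptions for
# illustration, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialHead(nn.Module):
    """Maps per-ray features to Normal-Inverse-Gamma (NIG) parameters."""

    def __init__(self, feature_dim: int, out_dim: int = 3):
        super().__init__()
        # Four NIG parameters (gamma, nu, alpha, beta) per output channel.
        self.fc = nn.Linear(feature_dim, 4 * out_dim)
        self.out_dim = out_dim

    def forward(self, features: torch.Tensor):
        gamma, log_nu, log_alpha, log_beta = torch.split(
            self.fc(features), self.out_dim, dim=-1
        )
        nu = F.softplus(log_nu)               # nu > 0
        alpha = F.softplus(log_alpha) + 1.0   # alpha > 1 so E[sigma^2] exists
        beta = F.softplus(log_beta)           # beta > 0

        mean = gamma                              # predicted color
        aleatoric = beta / (alpha - 1.0)          # E[sigma^2]: data noise
        epistemic = beta / (nu * (alpha - 1.0))   # Var[mu]: model uncertainty
        return mean, aleatoric, epistemic


# Usage: both uncertainties fall out of one forward pass, with no
# sampling, ensembling, or extra rendering passes.
head = EvidentialHead(feature_dim=128)
mean, aleatoric, epistemic = head(torch.randn(1024, 128))
```

Because the NIG parameters themselves encode both uncertainty types, no Monte Carlo sampling or ensemble of models is needed, which is what makes the zero-overhead claim plausible for this family of methods.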